Research
I am interested in perception and computer graphics, especially anything involving computational display and psychophysics. My prior work spans perceptual metrics, brightness and color, stereo 3D, and display topics such as virtual and augmented reality, frame rate, and high dynamic range.
|
|
Skin-Screen: A Computational Fabrication Framework for Color Tattoos
Piovarci, Chapiro, Bickel
SIGGRAPH 2023 [journal]
bibtex /
project page
In this work, we examined tattoos through the lens of computational fabrication. To build our model, we created an automatic tattoo robot and processes to generate synthetic skins to experiment on. We created a framework to predict and modify tattoo color for different skin tones, which we hope will lead to better tattoo quality for everyone. This work also has medical and robotics applications!
|
|
Critical Flicker Frequency (CFF) at high luminance levels
Chapiro, Matsuda, Ashraf, Mantiuk
Human Vision and Electronic Imaging (HVEI) 2023
bibtex
Flicker is a common temporal artifact that is affected by many parameters, such as luminance and retinal eccentricity. We gathered a high-luminance dataset of flicker fusion thresholds, showing that the popular Ferry-Porter law does not generally hold above 1,000 nits and that the increase in sensitivity saturates.
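For context, a textbook statement of the law (not a result from this paper): the Ferry-Porter law predicts the critical flicker frequency to grow linearly with log luminance, roughly
CFF(L) ≈ a · log10(L) + b
for fitted constants a and b. Our measurements indicate that this linear growth levels off at very high luminance.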
|
|
Modelling contrast sensitivity of discs
Ashraf, Mantiuk, Chapiro
Human Vision and Electronic Imaging (HVEI) 2023
bibtex
Studies of spatial and temporal sensitivity are often conducted with different types of stimuli. We studied how the results of experiments that use discs can be predicted from data collected with Gabor patches. This can lead to more comprehensive models calibrated on both types of data.
|
|
Geo-metric: A Perceptual Dataset of Distortions on Faces
Wolski, Trutoiu, Dong, Shen, MacKenzie, Chapiro
SIGGRAPH Asia 2022 [journal]
bibtex /
video [TBD] /
code/data
We studied the perception of geometric distortions on faces. We created a novel, demographically balanced dataset of human faces and measured the perceived magnitudes of several relevant distortions through a large-scale subjective study.
|
|
Realistic Luminance in VR
Matsuda*, Chapiro*, Zhao, Bachy, Lanman
* = equal contribution
SIGGRAPH Asia 2022 [conference]
bibtex /
video [TBD] /
code/data
We used the Starburst prototype HDR VR display (20,000+ nits) to run a study measuring user preferences for realism when immersed in natural scenes. We found that preferred luminance levels extend beyond what is available in VR today and change significantly between indoor and outdoor scenes.
|
|
HDR VR
Matsuda, Zhao, Chapiro, Smith, Lanman
SIGGRAPH'22 E-tech
bibtex /
video /
best in show award
Our HDR VR prototype display can reach brightness values over 20,000 nits. This work won "best in show" in the Emerging Technologies section of SIGGRAPH'22, and has received widespread media attention: Adam Savage's Tested, CNET, UploadVR, DigitalTrends, TechRadar, Mashable, RoadToVR.
|
|
stelaCSF: A Unified Model of Contrast Sensitivity as the Function of Spatio-Temporal Frequency, Eccentricity, Luminance and Area
Mantiuk, Ashraf, Chapiro
SIGGRAPH 2022 [journal]
bibtex /
code & data /
project page
We unified contrast sensitivity datasets from the literature, which allowed us to create the most comprehensive and precise CSF model to date. This new model can be used to improve many applications in visual computing, such as metrics. Data, code, and additional information available on our project page.
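For readers unfamiliar with CSFs, here is a toy sketch of the kind of query such a model answers (a made-up stand-in, not the released stelaCSF code; see the project page for the real model):
import numpy as np

# Toy stand-in for a contrast sensitivity function (NOT stelaCSF itself):
# a log-parabola over spatial frequency whose peak sensitivity rises with
# luminance and falls with eccentricity, just to illustrate the interface.
def toy_csf(sf_cpd, luminance_cd_m2, eccentricity_deg):
    peak_sens = 100.0 * np.log10(1.0 + luminance_cd_m2) / (1.0 + 0.1 * eccentricity_deg)
    peak_sf = 3.0 / (1.0 + 0.2 * eccentricity_deg)  # peak shifts to lower frequencies in the periphery
    return peak_sens * np.exp(-np.log2(sf_cpd / peak_sf) ** 2)

# The smallest detectable contrast is the reciprocal of sensitivity.
threshold = 1.0 / toy_csf(sf_cpd=4.0, luminance_cd_m2=100.0, eccentricity_deg=0.0)
print(f"toy detection threshold: {threshold:.4f}")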
|
|
FovVideoVDP: A Visible Difference Predictor for Wide Field-of-View Video
Mantiuk, Denes, Chapiro, Kaplanyan, Rufo, Bachy, Lian, Patney
SIGGRAPH 2021 [journal]
supplementary material /
bibtex /
github /
project page
We created a new foveated spatiotemporal metric, following the VDP line of work. This metric is fast, easy to use, and carefully calibrated on several large datasets.
|
|
A Luminance-Aware Model of Judder Perception
Chapiro, Atkins, Daly.
ACM Transactions on Graphics (TOG), presented at SIGGRAPH 2020
Link to ACM TOG /
supplementary material /
bibtex /
code /
presentation video
We studied the main perceptual components of judder, the perceptual artifact of non-smooth motion. In particular, adaptation luminance is a strong factor in judder, and it has changed significantly with modern generations of displays.
Errata: in the table of coefficients in Sec. A3, the two-before-last coefficient should be ~0 instead of 1.01. The coefficient is written out correctly in the supplementary material, but not in the manuscript.
|
|
Influence of Screen Size and Field of View on Perceived Brightness
Chapiro, Kunkel, Atkins, Daly.
ACM Transactions on Applied Perception (TAP), 2018
Link to ACM TAP /
supplementary material /
bibtex
Author's version available here; link to the ACM TAP version above. We studied the influence of screen size and viewing distance on perceived brightness, for screens ranging from cinema to mobile phones, an issue that affects artistic intent and appearance matching.
|
|
Unfolding the 8-bit Era
Zund, Berard, Chapiro, Schmid, Ryffel, Bermano, Gross, Sumner.
European Conference on Visual Media Production (CVMP), 2015
project page /
bibtex
We created an immersive gaming system out of a legacy console. This work was presented at the Eurographics 2015 banquet and at the Ludicious game festival in Zurich. It also received wide media attention (arstechnica, engadget, 20minuten, konbini, xataca, gamedeveloper, factornews, boingboing) and has over 200,000 views on YouTube.
|
|
Art-Directable Continuous Dynamic Range Video
Chapiro, Aydin, Stefanoski, Croci, Smolic, Gross.
Computers & Graphics (C&G), 2015
project page /
bibtex
We defined the production and distribution challenges facing the content creation industry in the current HDR landscape and proposed Continuous Dynamic Range video as a solution.
|
|
Video Content and Structure Description Based on Keyframes, Clusters and Storyboards
Junyent, Beltran, Farre, Pont-Tuset, Chapiro, Smolic.
IEEE International Workshop on Multimedia Signal Processing (MMSP), 2015
video /
bibtex
We developed a pipeline to segment and analyze video. Our technique could be applied for smarter editing.
|
|
Stereo from Shading
Chapiro, O'Sullivan, Jarosz, Gross, Smolic.
Eurographics Symposium on Rendering (EGSR), 2015
project page /
bibtex /
supplementary
We used non-photorealistic shading as an alternative 3D cue, augmenting the feeling of depth in stereoscopic images.
|
|
Perceptual Evaluation of Cardboarding in 3D Content Visualization
Chapiro, O'Sullivan, Jarosz, Gross, Smolic.
ACM Symposium on Applied Perception (SAP), 2014
project page /
bibtex
We conducted perceptual experiments to quantify cardboarding, an artifact that occurs when not enough depth is given to a region of a stereoscopic 3D image.
|
|
Optimizing Stereo-to-Multiview Conversion for Autostereoscopic Displays
Chapiro, Heinzle, Aydin, Poulakos, Zwicker, Smolic, Gross.
Eurographics, 2014
project page /
bibtex
We measured perceptual aspects of autostereo content and created a depth re-mapping algorithm that optimizes content so that the most important regions receive a fuller sense of depth while staying within the limits of the technology.
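For illustration, a generic way to implement importance-driven disparity re-mapping (not the exact algorithm from the paper) is to give heavily weighted disparity ranges a larger share of the display's limited disparity budget:
import numpy as np

# Illustrative sketch of importance-weighted disparity re-mapping
# (a generic technique, not the paper's exact method).
def remap_disparity(disparity, importance, out_min, out_max, n_bins=64):
    # Importance-weighted histogram of the input disparities.
    hist, edges = np.histogram(disparity, bins=n_bins, weights=importance)
    cdf = np.cumsum(hist + 1e-6)
    cdf = np.concatenate(([0.0], cdf / cdf[-1]))  # monotonic, normalized to [0, 1]
    # Important disparity ranges occupy a larger share of [out_min, out_max].
    return out_min + (out_max - out_min) * np.interp(disparity, edges, cdf)

# Example: random disparities with a salient region around zero disparity.
disp = np.random.uniform(-30.0, 30.0, size=(240, 320))
importance = np.exp(-(disp / 5.0) ** 2)  # stand-in saliency map
disp_remapped = remap_disparity(disp, importance, out_min=-8.0, out_max=8.0)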
|
|
Towards Mobile HDR Video
Castro, Chapiro, Cicconet, Velho.
extended abstract in Eurographics, 2011
video
We created a capture and processing pipeline to generate HDR video on a mobile phone by taking sequential multiple exposures.
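For flavour, a minimal desktop approximation of such a merge step using OpenCV's built-in exposure fusion (Mertens et al.); this is not our original mobile implementation, and the file names are placeholders:
import cv2

# Merge a burst of bracketed exposures into a single frame with exposure
# fusion. Placeholder file names; in the video setting this runs per burst.
exposures = [cv2.imread(name) for name in ("under.jpg", "mid.jpg", "over.jpg")]

# Align the shots first, since hand-held sequential captures shift slightly.
cv2.createAlignMTB().process(exposures, exposures)

# Mertens fusion blends the exposures directly, with no camera-response
# recovery or tone mapping; the result is a float image in [0, 1].
fused = cv2.createMergeMertens().process(exposures)
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))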
|
|
Detection of High Frequency Regions in Multiresolution
Mota, Perez, Castro, Chapiro, Vieira.
International Conference on Image Processing (ICIP), 2009
We improved our edge detector by using eigenvalues.
|
|
High Frequency Assessment from Multiresolution Analysis
Castro, Perez, Mota, Chapiro, Vieira, Freire.
International Conference on Computational Science (ICCS), 2009
We detected high frequencies using an orientation tensor and a multiresolution analysis.
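A rough sketch of the idea shared by the two papers above (generic structure-tensor analysis, not our exact formulation): build an orientation tensor from image gradients at a given resolution level and use its eigenvalues to flag high-frequency regions.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

# Generic orientation/structure tensor sketch: large eigenvalues mark strong
# local gradients, i.e. edges and other high-frequency content.
def tensor_eigenvalues(image, sigma=2.0):
    gx = sobel(image, axis=1)
    gy = sobel(image, axis=0)
    # Locally averaged outer products of the gradient form the tensor.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Eigenvalues of the 2x2 symmetric tensor, in closed form.
    trace = jxx + jyy
    root = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    return 0.5 * (trace + root), 0.5 * (trace - root)

image = np.random.rand(128, 128)                 # placeholder image (one pyramid level)
lam1, _ = tensor_eigenvalues(image)
high_freq_mask = lam1 > np.percentile(lam1, 90)  # top 10% strongest responses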
|
Posters:
|
|
The Influence of Visual Salience on Video Consumption Behavior: A Survival Analysis Approach
Huber, Scheibehenne, Chapiro, Frey, Sumner.
ACM Web Science, 2015
long paper version
We found that visual saliency can be used as a predictor for video watching behavior in online platforms.
|
|
Filter Based Deghosting for Exposure Fusion Video
Chapiro, Cicconet, Velho.
SIGGRAPH, 2011
video
We used an additional per-pixel parameter to avoid ghosting from motion when generating exposure-fusion videos on a mobile phone. (Student Research Competition semi-finalist)
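For illustration, one common deghosting heuristic (not necessarily our exact per-pixel parameter): down-weight pixels in each exposure that still disagree with a reference frame after the exposure difference is compensated.
import numpy as np

# Illustrative deghosting weight: after scaling the reference by the exposure
# ratio, pixels that still differ are likely moving objects, so their fusion
# weight is pushed toward zero. Values are assumed normalized to [0, 1].
def ghost_weight(exposure, reference, exposure_ratio, sigma=0.05):
    predicted = np.clip(reference * exposure_ratio, 0.0, 1.0)
    diff = exposure - predicted
    return np.exp(-(diff ** 2) / (2.0 * sigma ** 2))  # ~1 where consistent, ~0 where ghosting

# These weights would multiply the usual exposure-fusion weights
# (contrast, saturation, well-exposedness) before normalization.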
|
|
Towards Mobile HDR Video
Castro, Chapiro, Cicconet, Velho.
International Conference on Computational Photography (ICCP), 2011
video
We created a capture and processing pipeline to generate HDR video on a mobile phone by taking sequential multiple exposures.
|
Additional publications available upon request.
|
Website template taken from here
|
|