Alex Chapiro

I am a Staff Researcher at Meta, working in Ajit Ninan's Applied Perception Science team in Reality Labs. I previously worked in the Core Display Incubation team at Apple, the Applied Vision Science team at Dolby Laboratories, and the Stereo and Displays group at Disney Research Zurich.

I did my PhD in Markus Gross' Computer Graphics Laboratory at ETH Zurich. I got my master's in applied math at IMPA/Visgraf, and my bachelor's in math at the Federal University of Juiz de Fora.

Email  /  CV  /  Biography  /  Thesis  /  Google Scholar  /  LinkedIn

Research

I am interested in perception and computer graphics, especially anything involving computational display and psychophysics. My prior work spans perceptual metrics, brightness and color, stereo 3D, and display topics such as virtual and augmented reality, frame rate, and high dynamic range.

JOV'24

castleCSF — A contrast sensitivity function of color, area, spatiotemporal frequency, luminance and eccentricity
Ashraf, Mantiuk, Chapiro, Wuerger
Journal of Vision 2024
bibtex / code & data / project page

We followed up on our work unifying contrast sensitivity datasets from the literature by integrating color. This extends our data-driven CSF model to cover color, area, spatiotemporal frequency, luminance and eccentricity. Our model outperforms existing CSFs and, importantly, covers the perceptual parameters critical for display engineering and computer graphics tasks. Data, code, and additional information are available on our project page.
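
For readers new to the area: a contrast sensitivity function predicts sensitivity as the reciprocal of the smallest contrast an observer can detect. In notation chosen here for illustration (not necessarily the paper's),

$$ S(\rho, \omega, L, a, e, c) = \frac{1}{C_t(\rho, \omega, L, a, e, c)}, $$

where $\rho$ and $\omega$ are spatial and temporal frequency, $L$ luminance, $a$ stimulus area, $e$ retinal eccentricity, $c$ the chromatic direction, and $C_t$ the detection threshold contrast.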

SIGGRAPH Asia'23

Perceptually Adaptive Real-Time Tone Mapping
Tariq, Matsuda, Penner, Jia, Lanman, Ninan, Chapiro
SIGGRAPH Asia 2023 [conference]
bibtex / video

We created a perceptual framework that determines the optimal parameters for a tone mapper in real time (<1 ms per frame on Quest 2). We use this system and the Starburst HDR VR prototype to demonstrate that content shown with an optimized version of the Photographic TMO is preferred over heuristic or unoptimized versions, even when display luminance is reduced tenfold.
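
For background, here is a minimal sketch of the global Photographic TMO (Reinhard et al. 2002) referenced above; the exposure key and white point are the kind of parameters such a framework would tune per frame. Names and defaults below are illustrative, not taken from the paper.

```python
import numpy as np

def photographic_tmo(lum, key=0.18, l_white=None):
    """Global Photographic TMO (Reinhard et al. 2002), illustrative sketch.

    lum     : array of linear scene luminances (> 0)
    key     : target exposure (the "key" of the scene)
    l_white : smallest luminance that maps to pure white
    """
    # Log-average (geometric mean) luminance of the frame.
    l_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    # Scale the frame so its log-average lands on the chosen key.
    l_scaled = key / l_avg * lum
    if l_white is None:
        l_white = l_scaled.max()
    # Sigmoidal compression with a soft roll-off toward the white point.
    return l_scaled * (1.0 + l_scaled / l_white**2) / (1.0 + l_scaled)
```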

SIGGRAPH'23

Skin-Screen: A Computational Fabrication Framework for Color Tattoos
Piovarci, Chapiro, Bickel
SIGGRAPH 2023 [journal]
bibtex / project page

In this work, we examined tattoos through the lens of computational fabrication. To build our model, we created an automated tattooing robot and a process for generating synthetic skin to experiment on. We then built a framework to predict and adjust tattoo colors across different skin tones, which we hope will lead to better tattoo quality for everyone. This work also has medical and robotics applications!

EI HVEI'23

Critical Flicker Frequency (CFF) at high luminance levels
Chapiro, Matsuda, Ashraf, Mantiuk
Human Vision and Electronic Imaging (HVEI) 2023
bibtex / presentation video

Flicker is a common temporal artifact whose visibility depends on many parameters, such as luminance and retinal eccentricity. We gathered a high-luminance dataset of flicker fusion thresholds, showing that the popular Ferry-Porter law does not generally hold above 1,000 nits and that the increase in sensitivity saturates.
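
For context, the Ferry-Porter law states that the critical flicker frequency grows linearly with the logarithm of retinal illuminance,

$$ \mathrm{CFF} = a \cdot \log_{10}(I) + b, $$

where $a$ and $b$ are empirical constants; our measurements show this growth flattening at very high luminance, where the linear prediction overestimates sensitivity.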

EI HVEI'23

Modelling contrast sensitivity of discs
Ashraf, Mantiuk, Chapiro
Human Vision and Electronic Imaging (HVEI) 2023
bibtex

Studies of spatial and temporal sensitivity are often conducted with different types of stimuli. We studied how results from experiments using discs can be predicted from data collected with Gabor patches, which can lead to more comprehensive models calibrated on both types of data.

SIGGRAPH Asia'22

Geo-metric: A Perceptual Dataset of Distortions on Faces
Wolski, Trutoiu, Dong, Shen, MacKenzie, Chapiro
SIGGRAPH Asia 2022 [journal]
bibtex / video [TBD] / code & data

We studied the perception of geometric distortions on faces. We created a novel, demographically balanced dataset of human faces and measured the perceived magnitudes of several relevant distortions through a large-scale subjective study.

SIGGRAPH Asia'22

Realistic Luminance in VR
Matsuda*, Chapiro*, Zhao, Bachy, Lanman
* = equal contribution
SIGGRAPH Asia 2022 [conference]
bibtex / video [TBD] / code & data

We used Starburst, our 20,000+ nit HDR VR prototype display, to run a study measuring user preferences for realism when immersed in natural scenes. We found that preferred luminance extends beyond what is available in VR today and changes significantly between indoor and outdoor scenes.

SIGGRAPH'22 E-tech

HDR VR
Matsuda, Zhao, Chapiro, Smith, Lanman
SIGGRAPH'22 E-tech
bibtex / video / best in show award

Our HDR VR prototype display reaches luminances above 20,000 nits. This work won "Best in Show" in the Emerging Technologies section of SIGGRAPH'22 and received widespread media attention: Adam Savage's Tested, CNET, UploadVR, DigitalTrends, TechRadar, Mashable, RoadToVR.

SIGGRAPH'22

stelaCSF — A Unified Model of Contrast Sensitivity as the Function of Spatio-Temporal Frequency, Eccentricity, Luminance and Area
Mantiuk, Ashraf, Chapiro
SIGGRAPH 2022 [journal]
bibtex / code & data / project page

We unified contrast sensitivity datasets from the literature, which allowed us to create the most comprehensive and precise CSF model to date. The new model can improve many applications in visual computing, such as quality metrics. Data, code, and additional information are available on our project page.

SIGGRAPH'21

FovVideoVDP: A Visible Difference Predictor for Wide Field-of-View Video
Mantiuk, Denes, Chapiro, Kaplanyan, Rufo, Bachy, Lian, Patney
SIGGRAPH 2021 [journal]
supplementary material / bibtex / github / project page

We created a new foveated spatiotemporal metric, following the VDP line of work. The metric is fast, easy to use, and carefully calibrated on several large datasets.
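
A usage sketch, assuming the pyfvvdp Python package from the project's GitHub repository; the names below follow its README at the time of writing and may differ in current releases.

```python
import numpy as np
import pyfvvdp

# Configure the metric for a display model (geometry and photometry).
fv = pyfvvdp.fvvdp(display_name='standard_4k', heatmap='threshold')

# Placeholder test/reference content, for illustration only.
reference = np.random.rand(512, 512, 3).astype(np.float32)
test = np.clip(reference + 0.02 * np.random.randn(512, 512, 3),
               0, 1).astype(np.float32)

# Quality is reported in JODs (just-objectionable-differences); higher is better.
q_jod, stats = fv.predict(test, reference, dim_order="HWC")
print("Quality (JOD):", q_jod)
```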

ACM TOG 2019

A Luminance-Aware Model of Judder Perception
Chapiro, Atkins, Daly.
ACM Transactions on Graphics (TOG), Presented at SIGGRAPH 2020
Link to ACM TOG / supplementary material / bibtex / code / presentation video

We studied the main perceptual components of judder, the artifact of non-smooth motion. In particular, adaptation luminance is a strong factor in judder, and one that has changed significantly with modern generations of displays.

Erratum: in the table of coefficients in Sec. A3, the second-to-last coefficient should be ~0 instead of 1.01. The coefficient is written correctly in the supplementary material, but not in the manuscript.

ACM TAP 2018

Influence of Screen Size and Field of View on Perceived Brightness
Chapiro, Kunkel, Atkins, Daly.
ACM Transactions on Applied Perception (TAP), 2018
Link to ACM TAP / supplementary material / bibtex

The author's version is available here; the ACM TAP version is linked above. We studied the influence of screen size and distance on perceived brightness for screens ranging from cinema to mobile phones, an issue that affects artistic intent and appearance matching.

CVMP2015

Unfolding the 8-bit Era
Zund, Berard, Chapiro, Schmid, Ryffel, Bermano, Gross, Sumner.
European Conference on Visual Media Production (CVMP), 2015
project page / bibtex

We created an immersive gaming system out of a legacy console. This work was presented at the Eurographics 2015 banquet and at the Ludicious game festival in Zurich. It also received wide media attention (arstechnica, engadget, 20minuten, konbini, xataka, gamedeveloper, factornews, boingboing) and has over 200,000 views on YouTube.

CGnA2015

Art-Directable Continuous Dynamic Range Video
Chapiro, Aydin, Stefanoski, Croci, Smolic, Gross.
Computers & Graphics (C&G), 2015
project page / bibtex

We defined the production and distribution challenges facing the content creation industry in the current HDR landscape and proposed Continuous Dynamic Range video as a solution.

MMSP2015

Video Content and Structure Description Based on Keyframes, Clusters and Storyboards
Junyent, Beltran, Farre, Pont-Tuset, Chapiro, Smolic.
IEEE International Workshop on Multimedia Signal Processing (MMSP), 2015
video / bibtex

We developed a pipeline to segment and analyze video. Our technique could be applied to enable smarter editing.

EGSR2015

Stereo from Shading
Chapiro, O'Sullivan, Jarosz, Gross, Smolic.
Eurographics Symposium on Rendering (EGSR), 2015
project page / bibtex / supplementary

We used non-photorealistic shading as an alternative 3D cue, augmenting the sense of depth in stereoscopic images.

SAP2014

Perceptual Evaluation of Cardboarding in 3D Content Visualization
Chapiro, O'Sullivan, Jarosz, Gross, Smolic.
ACM Symposium on Applied Perception (SAP), 2014
project page / bibtex

We conducted perceptual experiments to quantify cardboarding, an artifact that occurs when a stereoscopic 3D image region is shown with too little depth.

EG2014

Optimizing Stereo-to-Multiview Conversion for Autostereoscopic Displays
Chapiro, Heinzle, Aydin, Poulakos, Zwicker, Smolic, Gross.
Eurographics, 2014
project page / bibtex

We measured perceptual aspects of autostereoscopic content and created a depth remapping algorithm that optimizes content so that the most important regions receive a fuller sense of depth while staying within the limits of the technology.

Cas11

Towards Mobile HDR Video
Castro, Chapiro, Cicconet, Velho.
Extended abstract at Eurographics, 2011
video

We created a capture and processing pipeline to generate HDR video on a mobile phone by taking sequential multiple exposures.
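
A minimal sketch of the core merging step, in the spirit of classic HDR assembly (Debevec & Malik 1997) rather than the exact pipeline from the paper, which additionally handles capture control and video timing:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge bracketed LDR frames into an HDR radiance map.

    frames         : list of linearized images with values in [0, 1]
    exposure_times : matching exposure times in seconds
    """
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros(frames[0].shape, dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        # Hat weighting: trust mid-tones, down-weight clipped pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```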

Mot09

Detection of High Frequency Regions in Multiresolution
Mota, Perez, Castro, Chapiro, Vieira.
International Conference on Image Processing (ICIP), 2009

We improved our detector of high-frequency regions by using the eigenvalues of the orientation tensor.

Cas09

High Frequency Assessment from Multiresolution Analysis
Castro, Perez, Mota, Chapiro, Vieira, Freire.
International Conference on Computational Science (ICCS), 2009

We detected high frequencies using an orientation tensor and multiresolution.

Posters:

Hub15

The Influence of Visual Salience on Video Consumption Behavior: A Survival Analysis Approach
Huber, Scheibehenne, Chapiro, Frey, Sumner.
ACM Web Science, 2015
long paper version

We found that visual saliency can be used as a predictor of video-watching behavior on online platforms.

Cha11

Filter Based Deghosting for Exposure Fusion Video
Chapiro, Cicconet, Velho.
SIGGRAPH, 2011
video

We used an additional per-pixel parameter to avoid ghosting from motion when generating exposure-fusion videos on a mobile phone. (Student Research Competition semi-finalist)

Cas11b

Towards Mobile HDR Video
Castro, Chapiro, Cicconet, Velho.
International Conference on Computational Photography (ICCP), 2011
video

We created a capture and processing pipeline to generate HDR video on a mobile phone by taking sequential multiple exposures.

Additional publications available upon request.

Website template taken from here