
Depth Perception in AR/VR: Optics, Graphics and Content Virtual Panel Discussion

Hosted By: Display Technology Technical Group

29 June 2021 10:00 - 11:30

Eastern Time (US & Canada) (UTC -05:00)

Join the OSA Display Technology Technical Group for a virtual panel discussion on depth perception in AR/VR featuring presentations from Ana Serrano, Max Planck Institute for Informatics; Paul Linton, City, University of London; and Erdem Sahin, Tampere University. Each talk will be approximately 15 minutes long, and a discussion session will follow the talks. Details of the talks appear below.

Studying Perception in Virtual Reality for Content Generation Applications presented by Ana Serrano:

Virtual Reality has the potential to dramatically change the way we create and consume content in our everyday lives. This technology can unlock unprecedented user experiences by allowing for an increased sense of presence, immersion, and engagement. In the last few years, we have witnessed astonishing progress in technological developments, such as capture and display technologies, accompanied by a steady advance in the understanding of cognitive factors regarding users’ perception in this new medium. For VR to become commonplace and realize its full potential, various aspects of computer graphics and applied perception play a crucial role. In this talk, we will discuss how the study of perceptual and attentional behaviors can assist graphics applications, focusing in particular on content generation.

Size and Distance Perception in Virtual and Augmented Reality presented by Paul Linton:

As virtual and augmented reality move increasingly towards object interaction in near space, the accuracy of visual depth perception becomes paramount. However, evidence from hidden-hand reaching tasks suggests that the perception of distance in virtual reality continues to be distorted. It is natural to look for solutions in the human vision literature, and in this talk we consider three approaches. First, the fixation distance of the eyes (vergence) is thought to direct reaching and grasping in near space, but in two experiments we demonstrate that vergence does not affect the perceived size and distance of objects. Second, we consider the other distance cues in the vision science literature and conclude that they are unlikely to significantly improve distance perception. Third, we review increasing evidence that size and distance perception relies on higher-level cognitive influences, and explore how this suggests new avenues for future work.

Computational Imaging for Accommodation-Invariant Near-Eye Displays presented by Erdem Sahin:

Traditional stereoscopic near-eye displays (both VR and AR) rely on stereo cues to deliver the depth information of 3D scenes. However, such displays fail to provide accurate focus cues. As a result of the well-known vergence-accommodation conflict (VAC), visual discomfort occurs and visual performance is hindered. The approaches addressing the VAC are either accommodation-enabling (e.g., varifocal, multifocal, light field, and holographic displays) or accommodation-invariant (AI). In this talk, we will discuss the latter, relatively less studied, approach from the perspective of computational imaging. We will talk about computational AI displays that combine neural network based (computational) coding with (optical) decoding through diffractive optical elements. We will elaborate on how machine learning enables the optimal design of such AI displays in an end-to-end manner, resulting in novel (static) computational eyepiece optics that can efficiently address the VAC.
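To make the end-to-end idea concrete, the following is a minimal, hypothetical sketch, not the speaker's actual method or code. It assumes a PyTorch-style differentiable defocus simulation in which a learnable diffractive phase profile (the optical decoder) and a small convolutional coding network are optimized jointly so that the simulated retinal image stays sharp across several accommodation states. All names, grid sizes, and defocus values are illustrative assumptions.

```python
# Hypothetical sketch of end-to-end design for an accommodation-invariant
# eyepiece: jointly optimize a diffractive phase profile (optical decoder)
# and a small CNN (computational coder) so the simulated retinal image
# remains sharp over a range of defocus (accommodation) states.
import math
import torch
import torch.nn as nn

N = 64                                   # toy pupil-plane grid size (assumption)

# Normalized pupil coordinates and a circular aperture.
x = torch.linspace(-1.0, 1.0, N)
X, Y = torch.meshgrid(x, x, indexing="ij")
R2 = X**2 + Y**2
aperture = (R2 <= 1.0).float()

# Learnable diffractive phase profile (the static eyepiece element).
doe_phase = nn.Parameter(torch.zeros(N, N))

# Small CNN that pre-codes the displayed image.
coder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def psf(defocus_waves):
    """Incoherent PSF of the pupil for a given peak defocus (in waves)."""
    defocus_phase = 2.0 * math.pi * defocus_waves * R2
    pupil = aperture * torch.exp(1j * (doe_phase + defocus_phase))
    field = torch.fft.fftshift(torch.fft.fft2(pupil))
    p = field.abs() ** 2
    return p / p.sum()

def retinal_image(coded, defocus_waves):
    """Blur the coded image with the defocused PSF (circular convolution via FFT)."""
    k = torch.fft.fft2(torch.fft.ifftshift(psf(defocus_waves)))
    return torch.fft.ifft2(torch.fft.fft2(coded) * k).real

optimizer = torch.optim.Adam(list(coder.parameters()) + [doe_phase], lr=1e-2)
target = torch.rand(1, 1, N, N)          # stand-in for a training image
defocus_states = [0.0, 0.5, 1.0]         # accommodation states to cover (waves)

for step in range(200):
    coded = coder(target)
    # Average reconstruction error over all accommodation states, so the
    # optimized optics/coding pair is invariant to where the eye focuses.
    loss = sum(
        nn.functional.mse_loss(retinal_image(coded[0, 0], d), target[0, 0])
        for d in defocus_states
    ) / len(defocus_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice such systems would use physically calibrated wave-optics models, perceptually motivated loss terms, and large image datasets; the snippet only illustrates how automatic differentiation allows the eyepiece optics and the pre-processing network to be trained together.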


About Our Speakers:

Ana Serrano, Max Planck Institute for Informatics

Ana Serrano received her PhD in Computer Science from Universidad de Zaragoza (Spain) in 2019 and is currently a postdoctoral researcher at the Max Planck Institute for Informatics (Germany). During her PhD, she received an Adobe Research Fellowship honorable mention in 2017 and an NVIDIA Graduate Fellowship in 2018. Her thesis received one of the Eurographics 2020 PhD Awards. Her research spans several areas of visual computing; in particular, she is interested in computational imaging, material appearance perception and editing, and virtual reality, with a focus on applying perceptually motivated solutions. Her work has been published in top venues, including ACM Transactions on Graphics, Scientific Reports, and IEEE TVCG. She has also served on several technical papers program committees, including SIGGRAPH, Eurographics, and the ACM Symposium on Applied Perception.

Paul Linton, Centre for Applied Vision Research, City, University of London

Dr Paul Linton is a Research Fellow at the Centre for Applied Vision Research, City, University of London. His research focuses on how the human visual system processes visual scale and visual shape. He is the author of The Perception and Cognition of Visual Space (Palgrave, 2017) and co-organiser of the forthcoming Royal Society meeting on “New Approaches to 3D Vision”. He was a Research Intern on the Display Systems Research team at Facebook Reality Labs as part of the DeepFocus project, and was previously a Stipendiary Lecturer at the University of Oxford and a Teaching Fellow at University College London.

Erdem Sahin, Tampere University

Dr. Erdem Sahin received his Ph.D. from the Department of Electrical and Electronics Engineering at Bilkent University in 2013. In 2014, he joined the 3D Media Group in the Faculty of Information Technology and Communication Sciences at Tampere University as a Marie Curie Experienced Researcher, and he has been a Senior Research Fellow there since 2019. He has co-initiated several national and international research projects on plenoptic imaging, such as the Academy of Finland project “Modeling and visualization of perceivable LFs” (09/2019-09/2023) and the H2020 Marie Sklodowska-Curie Actions Innovative Training Networks project “PLENOPTIMA-Plenoptic imaging” (01/2021-01/2025). He has contributed to the light field and holographic imaging fields with more than 20 peer-reviewed scientific articles. His current research interests include the development of computational light field and holographic imaging algorithms and methods for next-generation 3D cameras and displays.
