3D Image Acquisition and Display: Technology, Perception and Applications

25–28 June 2018, Wyndham Orlando Resort International Drive, Orlando, Florida, United States

This topical meeting aims to bring together researchers and engineers from a broad range of interdisciplinary fields to present their work in the science, technology, and applications of non-holographic 3D image collection and display technologies. This meeting will cover research areas related to the acquisition, display, and applications of (non-holographic) 3D information as well as the perception, human factors, and visual comfort of 3D information displays.

Research related to the scientific understanding of 3D information display and perception, technological innovation in 3D image acquisition or display methods, and task-specific design and applications of 3D acquisition or display technology is of particular interest. Application areas of interest include, but are not limited to, virtual reality, augmented reality, biomedicine, microscopy, endoscopy, healthcare, autonomous vehicles, wearable displays, and entertainment.


Topics

  • Science and engineering of 3D information collection and display technologies
  • Acquisition, display, and applications of 3D information
  • Wearable display methods and technologies
  • User interface technologies for 3D systems
  • Image processing for 3D acquisition or display applications
  • Perception, human factors, and visual comfort of 3D information displays
  • Applications of 3D image acquisition or display technologies; examples include virtual reality, augmented reality, autonomous vehicles, wearable displays, and entertainment
  • Healthcare applications including biomedicine, microscopy, endoscopy, medical and scientific visualization
  • Defense and security
  • Optical engineering design of 3D information acquisition and display technology
  • Sensor technology for 3D
  • 3D display hardware and software technologies



Speakers

  • Aristide Dogariu, University of Central Florida, CREOL, United States
  • Atanas Gotchev, Tampereen Teknillinen Yliopisto, Finland 
    Densely-sampled light field: reconstruction, compression and applications
  • Viktor Gruev, Univ of Illinois at Urbana-Champaign, United States 
    Holographic goggles for near infrared fluorescence image guided surgery
  • Ryoichi Horisaki, Osaka University, Japan 
    Single-shot phase imaging with coded diffraction and its applications
  • Changwon Jang, Seoul National University, South Korea 
    Mixed Reality Near-eye Display with Focus Cue
  • Jae-Hyun Jung, Harvard Medical School, United States 
    Light-field Background De-cluttering for Visual Prostheses
  • Hirokazu Kato, Nara Institute of Science and Technology, Japan 
    An Adaptive Rendering for Microlens Array HMD based on Eye-Gaze Tracking
  • Mathias Kolle, MIT, United States 
    Reconfigurable and Dynamically Tunable Droplet-based Compound Micro-lenses
  • Bernard Kress, Microsoft Corp, United States 
    How Recent Optical Technology Breakthroughs Enable Next Generation Head Mounted Displays
  • Byoungho Lee, Seoul National University, South Korea
  • Hooman Mohseni, Northwestern University, United States
  • Jae-Hyeung Park, Inha University, South Korea 
    Optical See-through Three-dimensional Near-to-eye Display with Depth of Field Control
  • Filiberto Pla, Universitat Jaume I, Spain 
    3D Hand Gesture Recognition using Integral Imaging
  • Martina Poletti, University of Rochester, United States 
    The Retinal Input During Fixation: Binocular Head/eye Coordination at the Fine Scale
  • Jim Schwiegerling, University of Arizona, United States 
    Optical Properties of Schematic Eye Models
  • Natan Shaked, Tel-Aviv University, Israel
  • David Stork, Rambus Inc., United States 
    Smart optics for low-power computational sensing
  • Gordon Wetzstein, Stanford University, United States 
    Computational Near-eye Displays: Engineering the Interface between our Visual System and the Digital World



Committee

  • Hong Hua, University of Arizona, United States, General Chair
  • Bahram Javidi, University of Connecticut, United States, General Chair
  • Osamu Matoba, Kobe University, Japan, Program Chair
  • Adrian Stern, Ben Gurion University of the Negev, Israel, Program Chair
  • Yasuhiro Takaki, Tokyo Univ of Agriculture and Technology, Japan, Program Chair
  • Martin Banks, University of California Berkeley, United States
  • Oliver Bimber, Johannes Kepler University Linz, Austria
  • V. Michael Bove, Massachusetts Institute of Technology, United States
  • Toshiaki Fujii, Nagoya University, Japan
  • Ying Geng, Oculus VR LLC, United States
  • Juan Liu, Beijing Institute of Technology, China
  • Lu Lu, Facebook Inc., United States
  • Yi Qin, Google, United States
  • Basel Salahieh, Intel Corporation, United States



Plenary Session

Paul Debevec

Google VR, USA

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality

This talk will present work from USC ICT and Google VR in creating actors and environments for movies, games, and virtual reality. The Light Stage computational illumination and facial scanning systems are geodesic spheres of inward-pointing LED lights which have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The team has also used its full-body light stage in conjunction with natural language processing and automultiscopic projection to record and project interactive conversations with survivors of the World War II Holocaust. Debevec will conclude by discussing the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement.

About the Speaker

Paul Debevec is a research professor at the University of Southern California and the associate director of graphics research at USC's Institute for Creative Technologies. Debevec's Ph.D. thesis (UC Berkeley, 1996) presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Using Façade he led the creation of virtual cinematography of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were used to create virtual backgrounds in The Matrix. Subsequently, Debevec pioneered high dynamic range image-based lighting techniques in his films Rendering with Natural Light (1998), Fiat Lux (1999), and The Parthenon (2004); he also led the design of HDR Shop, the first high dynamic range image editing program. At USC ICT, Debevec has led the development of a series of Light Stage devices for capturing and simulating how objects and people reflect light, used to create photoreal digital actors in films such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, and Avatar, as well as 3D display devices for telepresence and teleconferencing. He received ACM SIGGRAPH's first Significant New Researcher Award in 2001, co-authored the 2005 book High Dynamic Range Imaging from Morgan Kaufmann, and chaired the SIGGRAPH 2007 Computer Animation Festival. He serves as Vice President of ACM SIGGRAPH and is a member of the Visual Effects Society, the Academy of Motion Picture Arts and Sciences, and the Academy's Science and Technology Council.

Jason Eichenholz

Luminar Technologies, USA

OSA Light the Future Presentation: The Role of Optics and Photonics in the Vehicles of Tomorrow

In this presentation, OSA Fellow Jason Eichenholz will take a high-level look at the future of optics and photonics technologies in autonomous vehicles. Optics are a crucial component in an industry headed for extreme disruption over the next few decades and will play a critical role in shaping the future of navigation, passenger experience, and the ultimate safety of the autonomous trip. Eichenholz will discuss the key optical components, including LiDAR, laser headlights, passenger monitoring, and interior lighting and displays; the role each plays inside a future automobile; and their impact on the transportation industry.

About the Speaker

Jason Eichenholz is a serial entrepreneur and a pioneer in laser, optics, and photonics product development and commercialization. Over the course of his twenty-five-year career, his unique blend of business and technical leadership has brought hundreds of millions of dollars of new photonics products to market. Eichenholz is an inventor on ten U.S. patents covering new types of solid-state lasers, displays, and photonic devices.

Laurent Pueyo

Space Telescope Science Institute, USA

Exoplanet Imaging: From Precision Optics to Precision Measurements

During this plenary talk, Laurent Pueyo will present recent observational results in exoplanet imaging and discuss prospects for similar experiments on NASA missions such as the upcoming James Webb Space Telescope and the currently studied Large UV/Optical/IR Surveyor.

About the Speaker

Laurent Pueyo is an astronomer at the Space Telescope Science Institute, in Baltimore, Maryland. He earned his doctorate from Princeton University in 2008 and conducted his post-doctoral work as a NASA Fellow at the Jet Propulsion Laboratory and as a Sagan Fellow at the Johns Hopkins University. His research focuses on imaging faint planets around nearby stars. He has pioneered advanced data analysis methods that are now standard tools used to study extrasolar planets, and invented an optical technique that is now baselined for future NASA missions. At STScI his duties include optimizing the extrasolar-planet imaging capabilities of NASA's James Webb Space Telescope (JWST), scheduled to launch in late 2019. He is also a member of the Science and Technology Definition Team for the Large Ultraviolet Optical and Infrared telescope, a future observatory that will identify Earth-sized planets and assess their habitability.



Special Events

 

Digital Holographic Microscopy: Present and Future Panel Discussion

Monday, 25 June, 12:30–14:00
Join the OSA Holography and Diffractive Optics Technical Group for a panel discussion exploring potential breakthroughs in digital holographic microscopy. Brief presentations from our featured panelists will be followed by a moderated question-and-answer session, helping facilitate the exchange of information with our community. Contact TGactivities@osa.org to register, pending availability.

Congress Reception

Monday, 25 June; 18:30–20:00
Come join your colleagues for drinks, networking, and thoughtful discussion, and enjoy light fare. The reception is open to all full conference attendees. Conference attendees may purchase extra tickets for their guests.

Student & Early Career Professional Development & Networking Lunch and Learn

Tuesday, 26 June; 12:30-14:00
This program will provide a unique opportunity for students and early career professionals, who are close to finishing or have recently finished their doctoral degrees, to interact with experienced researchers. Key industry and academic leaders in the community will be matched with each student based on the student's preferences or similarity of research interests. Students will have an opportunity to discuss their ongoing research and career plans with their mentors, while mentors will share their professional journeys and provide useful tips to those who attend. Lunch will be provided.

This workshop is complimentary for OSA Members, and space is limited. Not all who apply will be able to attend, and priority will be given to those who have recently graduated or are close to graduation.
Hosted by OSAF

50th Anniversary of Introduction to Fourier Optics by Joseph Goodman

Tuesday, 26 June; 13:30-19:30
This year marks the 50th anniversary of the publication of Introduction to Fourier Optics by Joseph Goodman, a book that has had a fundamental influence on the field of optical imaging. To commemorate this anniversary, a special series of talks will be presented, ranging from Fourier optics in the classroom to the evolution of the field.

Join the Image Sensing and Pattern Recognition Technical Group for a small reception immediately following the conclusion of the program.
 
  • Joseph W. Goodman, Stanford University, USA, Origins and Evolution of Introduction to Fourier Optics
  • James Fienup, University of Rochester, USA, ABCD Matrix Analysis for Fourier-Optics Imaging
  • Raymond Kostuk, University of Arizona, USA, A review of the wonderful discussion of Holography by Professor Goodman in his book: The Introduction to Fourier Optics.
  • James Leger, University of Minnesota Twin Cities, USA, What’s the Problem? Insight and Inspiration Derived from Solving the Exercises in J. Goodman’s Classic Book Introduction to Fourier Optics
  • Masud Mansuripur, University of Arizona, USA, Fourier Optics in the Classroom
  • Demetri Psaltis, Ecole Polytechnique Federale de Lausanne, Switzerland, The Transition of Fourier Optics Towards Computational Imaging and Digital Holography
  • William T. Rhodes, Florida Atlantic University, USA, Teaching Fourier Optics: What I do Differently after 50 Years
  • Bahaa Saleh, University of Central Florida, USA

Illumicon II

Tuesday, 26 June 2018, 19:00 – 21:00
You are invited to join the OSA Display Technology Technical Group for Illumicon II, an exclusive members-only event. Building on the declarations established at the inaugural Illumicon, convened in 2016, attendees will come together to discuss and debate emerging trends, technologies, and opportunities in advanced 3D displays. Our discussions will also seek input on how the Display Technology Technical Group can further engage the 3D community in the years ahead. Illumicon II attendees will converge over drinks and appetizers at a confidential location. Entrance will be granted to those able to provide the secret Illumicon II event password. RSVP to tgactivities@osa.org to receive the event location and password.

 

Applications of Visual Science Technical Group Networking Lunch

Wednesday, 27 June 2018, 12:00–13:00
Members of the OSA Applications of Visual Science Technical Group are invited to join us for a networking lunch on Wednesday. The event will provide an opportunity to connect with fellow attendees who share an interest in this field and to learn more about this technical group. Contact TGactivities@osa.org to register, pending availability.



Tour of Laser Propagation Facilities at Kennedy Space Center

Thursday, 28 June, 13:00–18:00, and Friday, 29 June, 07:00–12:00
Additional fee: $25 per person.
*The fee covers transportation only. Transportation will leave from the conference hotel and return to the Orlando International Airport and the conference hotel.

During this tour of Kennedy Space Center (KSC), you will see various facilities used for outdoor field experiments such as laser propagation measurements. The tour will include UCF's Townes Institute Science and Technology Experimentation Facility (TISTEF), the Shuttle Landing Facility (SLF), and the Vehicle Assembly Building (VAB). TISTEF is a site for experiments that require deployment in a fielded setting; it consists of a 1 km grass range equipped with atmospheric monitoring instruments and multiple scintillometers, as well as capabilities for optical tracking and remote sensing. From this site, slant-path measurements can be made over the 13 km path to the top of the VAB. The 5 km long SLF is ideal for longer-path measurements because of its homogeneity and flatness (Earth's curvature has been removed). This tour is made possible by the pcAOP committee and the University of Central Florida.

 

Student Grand Challenge: The Optical Systems of the Future

The challenge is open to OSA student members and their advisors interested in presenting concepts for enhanced machine vision, or for systems that enhance the human vision system by augmenting or extending another human sense.
 
Individuals or teams are invited to submit ideas for either a novel passive or active optical system in the form of a 35-word abstract and a two-page summary highlighting the novelty, originality, and feasibility of the concept. (Additional materials may include videos highlighting system mock-ups or demos.)
 
Up to four finalists will be chosen to attend the Imaging and Applied Optics Congress, 25–28 June 2018 in Orlando, Florida, USA, to present a 3-minute synopsis of their concept and to host a poster during the conference poster session. Finalists will receive a travel stipend of up to $2,000 USD to cover airfare and hotel, as well as full technical registration for the congress.
 
Two winners will be announced on-site and will receive a recognition plaque and a $250 prize.

Sponsored by Lockheed Martin and the OSA Foundation.

Passive Optical System Challenge Problem

The image processing community strives to duplicate human vision. For certain specific and well-defined tasks, machine vision has matched or surpassed human capability, but it still struggles with poorly defined and dynamic environments. Category-by-category comparisons between machine vision and human vision include:
  • Spectrum: Machine vision is superior, as human vision is limited to the visible spectrum. Machine vision can also resolve narrower spectral steps and larger dynamic ranges than our eyes.
  • Resolution: Human vision is superior. Current machine vision systems approaching 8K x 8K formats are starting to close the gap, but only in the visible spectrum.
  • Focus: Human vision is superior, being able to focus from very close to very far with a single lens element. The eye's aperture is limited by the size of the pupil for distant objects. Machine vision systems are designed specifically for very close or very far ranges and do not suffer from being aperture limited, but they use many lens elements to accomplish what the human eye does with one.
  • Optical Processing: Human vision plus the brain is superior to machine vision in pattern recognition and decision making.
The passive optical systems challenge is to create a novel concept, technology, or system for improving results in one of the eight categories below:
  • Image Processing: Ideas focused on detection and categorization of objects in the field of view.
  • Lens Technology: Ideas focused on optical sensors.
  • High-Speed Data Transport: Ideas focused on fast and efficient transport of high-resolution image data and streams.
  • Adaptable Lens Technology: Ideas focused on adaptable optical sensors.
  • Liquid Lens Optical Sensors: Ideas focused on liquid lenses.
  • Artificial Intelligence: Ideas focused on image-based cognition.
  • AR/VR Technology: Ideas focused on augmented and virtual reality technologies.
  • Other: Ideas that do not fall into one of the existing categories.

Active Optical System Challenge

Given that the human vision system is intrinsically passive, how would you use active sensing techniques to augment it to mimic or extend the human senses? Augmentation could mean adding higher-precision 3D vision, active foveal imaging, active IR-assisted sensing, vibrometry, polarimetry, sensing motion in the field of view, chemical/biological sensing, seeing through fog or turbulence, etc. Many such systems have been demonstrated, but they are often large, heavy, and costly.
 
The active optical system challenge is to come up with novel sensor concepts that mimic at least two of the human senses listed below at a distance of at least 10 m, with the sensor fitting into one third of the volume of a human brain (roughly 0.5 liters). More sensing modalities are encouraged, especially those that extend what humans can do.
 
  1. Sight (e.g., producing 2D or 3D images)
  2. Hearing (e.g., measuring object vibrations through optical means)
  3. Smell (e.g., chemical/biological sensing)
  4. Taste (e.g., chemical/biological sensing)
  5. Touch (e.g., characterization of surface texture and/or temperature)

Rules

  • Limited to undergraduate or graduate students.
  • Teams must include at least one OSA student member and at least one advisor who is an OSA member.
  • Required submission format: PDF with a 35-word abstract and a two-page summary.
    • Optional submission material: videos, system mock-ups, demonstrations.

Key Criteria

  • Compliance: Is the submission complete, and does it comply with the rules of the challenge?
  • Novelty: Does the idea describe a novel approach to providing a solution?
  • Originality: How original is the proposed technology or use of existing technology?
  • Relevance: How well does the idea relate to the topic and provide a solution aligned with the goals of this challenge?
  • Feasibility: How likely is it that the idea can be prototyped?

Evaluation

Submissions will be evaluated by a committee of Imaging and Applied Optics Congress leadership and Lockheed Martin executives.
