Luminar Technologies’ Co-Founder & CTO Jason Eichenholz Welcomes Questions in this Ask Me Anything!

Rebecca Andersen


   
Creating LiDAR your life can depend on, Luminar builds advanced sensors that measure millions of points per second and put that resolution where it matters most. This allows Luminar sensors to see not just where objects are, but what they are, even at distance. Co-Founder and CTO Jason Eichenholz is a serial entrepreneur and a pioneer in laser, optics, and photonics product development and commercialization. Over the past twenty-five years, he has led the development of hundreds of millions of dollars' worth of new photonics products.

Before joining Luminar as CTO and Co-Founder, Eichenholz was the CEO and founder of Open Photonics, an open innovation company dedicated to the commercialization of optics and photonics technologies. Prior to that, he served as the Divisional Technology Director at Halma PLC. In that role, he was responsible for supporting innovation, technology and strategic development for the Photonics and Health Optics Divisions. Before joining Halma, he was the CTO and Board Member of Ocean Optics Inc. as well as the Director of Strategic Marketing at Newport/Spectra-Physics.

Eichenholz is a Fellow of The Optical Society (OSA) and SPIE. He has served as the principal investigator for Air Force- and DARPA-funded research and development programs and holds ten U.S. patents on new types of solid-state lasers, displays, and photonic devices. Eichenholz holds an M.S. and a Ph.D. in Optical Science and Engineering from CREOL, The College of Optics and Photonics at the University of Central Florida, and a B.S. in Physics from Rensselaer Polytechnic Institute.

We’ve highlighted five key technology insights from this captivating AMA!

1. kaushik_93 - Hello Jason, thanks for doing the AMA! I have two questions for you:
I know there are many companies out there that are making self-driving cars. Does Luminar Technologies use similar technologies? How does this self-driving car differ from previous/existing ones from other manufacturers?
(A more technical question) How do you optimize sensors to know the road? As in, how does the car take into account the geometry of the road, spacing between cars and such things? In addition, what sensors do you use to optimize for such criteria?
 
Jason_Eichenholz - Hey Reddit! Happy to be here, excited to answer your LiDAR and self-driving vehicle questions. Here we go!

Luminar is producing LiDAR sensors for self-driving cars to power the entire industry. Most companies building LiDAR today use the same off-the-shelf components, and the technology hasn’t seen any major performance improvements in decades. Luminar’s LiDAR is designed from the chip level up, so we’re building all of our own components: our own lasers, receivers, scanning mechanisms, and processing electronics. The architecture is entirely new and allows us to achieve much higher resolution and much longer range than today’s systems. Rather than buy silicon chips off the shelf, we have developed our own highly sensitive InGaAs chip using a fraction of an InGaAs wafer, keeping costs down and performance up. These breakthroughs create the first dynamically configurable system operating at 1550 nm, compared to most LiDARs at 905 nm.

Driving conditions change dynamically, and therefore resolution needs constantly change. Where you need resolution differs depending on whether you’re driving on a highway or on a city street. Therefore, we have designed the first LiDAR sensor capable of adaptively configuring coverage of the vertical field of view for these sorts of changes. Our sensor is unique in that we can dynamically adjust the vertical resolution as needed. We always maintain a 120-degree horizontal field of view; we never want to miss important edge cases like children running out between parked cars.
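To make the adaptive-resolution idea concrete, here is a minimal Python sketch of one way a fixed budget of scan lines could be redistributed across the vertical field of view. The function, its parameters, and the Gaussian weighting are hypothetical illustrations, not Luminar's actual scan policy.

```python
import numpy as np

def allocate_scan_lines(n_lines, v_fov_deg, roi_center_deg, roi_width_deg):
    """Distribute a fixed budget of scan lines across the vertical FOV,
    concentrating them near a region of interest (e.g., the horizon).
    Purely illustrative; the real sensor's scan policy is not public."""
    # Candidate angles sampled uniformly across the vertical FOV.
    angles = np.linspace(-v_fov_deg / 2, v_fov_deg / 2, 10 * n_lines)
    # Gaussian importance weights centered on the region of interest.
    weights = np.exp(-((angles - roi_center_deg) / roi_width_deg) ** 2)
    weights /= weights.sum()
    # Inverse-CDF sampling at even quantiles packs lines where weight is high.
    cdf = np.cumsum(weights)
    quantiles = (np.arange(n_lines) + 0.5) / n_lines
    return angles[np.searchsorted(cdf, quantiles)]

# Highway: concentrate lines near the horizon for long-range detection.
highway_lines = allocate_scan_lines(64, v_fov_deg=30, roi_center_deg=0, roi_width_deg=3)
# City street: spread resolution more evenly to catch close, tall objects.
city_lines = allocate_scan_lines(64, v_fov_deg=30, roi_center_deg=0, roi_width_deg=12)
```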

2. adenovato - How long until snow and sleet are irrelevant to image resolution and identification?

Jason_Eichenholz - As with human drivers, poor weather conditions will never be irrelevant. Our goal at Luminar is to deliver sensor systems capable of sensing the world as deeply through weather as we can, so that a driving system can best deal with the driving task, even if that means reacting with slower speeds. Here is a rundown of some of the relevant details...

During light’s round trip at automotive distances (hundreds of meters), the environment effectively doesn’t move, which limits its interaction with an active sensor like LiDAR. Each of our measurements is taken through a narrow instantaneous field of view rather than looking at the whole scene, so rain and snow simply add a little noise in the foreground; we maintain our ability to see the scene behind the rain or snow. Further, humidity (water vapor) can cause absorption problems for very long-range LiDAR (many kilometers of range), but at automotive distances we see no impact. Because we are able to output dramatically more power at the eye-safe 1550 nm wavelength, we’re successfully punching through inclement weather conditions.
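A quick back-of-envelope check of the "environment doesn't move" point, using nothing but the speed of light (the 200 m range and 30 m/s vehicle speed below are illustrative numbers, not figures from the AMA):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(range_m):
    """Time for a laser pulse to reach a target and return."""
    return 2 * range_m / C

t = round_trip_time(200)   # ~1.33 microseconds at 200 m
car_speed = 30.0           # m/s, roughly highway speed
drift = car_speed * t      # distance a car travels during one measurement
print(f"round trip: {t * 1e6:.2f} us, target drift: {drift * 1e3:.3f} mm")
# -> round trip: 1.33 us, target drift: 0.040 mm; the scene is effectively frozen
```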

Small, dense obscurants like fog, dust, exhaust, or smog, just as with human vision, result in light scatter, which rewards longer-wavelength operation (see Mie scattering, or the small-particle approximation, Rayleigh scattering). Therefore, our system experiences an effective scattering cross-section roughly 8.6x smaller than that of our competitors at 905 nm.
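The 8.6x figure follows directly from the Rayleigh small-particle approximation, in which the scattering cross-section scales as 1/λ⁴:

```python
# Rayleigh (small-particle) scattering cross-section scales as 1/wavelength^4,
# so moving from 905 nm to 1550 nm shrinks it by a factor of (1550 / 905)^4.
ratio = (1550 / 905) ** 4
print(f"{ratio:.1f}x smaller scattering cross-section at 1550 nm")  # ~8.6x
```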

Our system is capable of detecting multiple returns per laser pulse. This enables the above ability to see through scatterers, as well as an easy way to filter out low-packing-density objects like raindrops.
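As a rough illustration of how multiple returns enable that filtering, here is a minimal Python sketch. It assumes returns arrive ordered near-to-far and uses a simple keep-the-last-strong-return heuristic; the data structures and thresholds are hypothetical, not Luminar's.

```python
from dataclasses import dataclass

@dataclass
class Return:
    range_m: float
    intensity: float  # normalized 0..1

def filter_obscurants(returns_per_pulse, min_intensity=0.2):
    """Keep the last sufficiently strong return from each pulse.

    Raindrops yield weak, near-range returns; a solid surface behind them
    usually produces the final, stronger return. Returns are assumed to be
    ordered near-to-far within each pulse."""
    kept = []
    for returns in returns_per_pulse:
        strong = [r for r in returns if r.intensity >= min_intensity]
        if strong:
            kept.append(strong[-1])  # last strong return = farthest surface
    return kept

pulses = [
    [Return(3.1, 0.05), Return(42.0, 0.90)],  # raindrop, then a car behind it
    [Return(2.7, 0.04)],                      # raindrop only: dropped entirely
    [Return(55.3, 0.70)],                     # clear shot at a wall
]
print(filter_obscurants(pulses))  # keeps the 42.0 m and 55.3 m returns
```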
Finally, a related comment: a bigger practical problem in adverse weather is what happens to the external window of the sensor or sensor package - ice, salt, packed snow, bird droppings…

3. MrMasterplan - Hi, Tesla and Daimler are developing autonomous driving based on cameras and visual cues alone. I have heard people claim that LiDAR is not useful at high speeds due to the long ranges at which detection is necessary. Do you think LiDAR can compete with image recognition in this situation, and if so, what advantages does it bring to the table?

Jason_Eichenholz - To date, the only auto company attempting to develop fully autonomous driving systems without LiDAR is Tesla. While camera technology has matured to a large degree and all cameras operate on the same basic principles, LiDARs come in several flavors. All commercially available LiDAR sensors for the automotive market have severe limitations in maximum range, resolution, inclement-weather performance, interference, scalability, and cost. This is why we have created the first LiDAR architecture that meets those requirements and can accurately see even dark objects (<10% reflectivity) past 200 meters, which is the same range typically targeted for detecting headlights and taillights using camera systems. While cameras are often useful for seeing objects at distance, they cannot do so with the accuracy and reliability needed for safe, truly autonomous driving. Nevertheless, cameras alone are still quite useful for basic ADAS (assisted driving) developments.

Every major player we are working with has stated that they need a LiDAR system that can detect low-reflectivity targets (<10%) at distances of 200 m and beyond in order to drive autonomously and safely.
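To see why reflectivity and range trade off so steeply, consider a first-order model for a direct-detection LiDAR viewing an extended Lambertian target, where received power scales roughly as reflectivity over range squared. Under that idealized model (the numbers below are illustrative, not from the AMA):

```python
import math

def equivalent_range(range_m, reflectivity, new_reflectivity):
    """Range at which a target of new_reflectivity returns the same power
    as the original target, under the first-order model P_rx ~ rho / R^2
    for an extended Lambertian surface (an idealized approximation)."""
    return range_m * math.sqrt(new_reflectivity / reflectivity)

# A 10%-reflective target at 200 m returns as much light as a 90% target at:
print(equivalent_range(200, reflectivity=0.10, new_reflectivity=0.90))  # 600.0 m
```

In other words, a sensor demonstrated only on bright targets can fall far short on dark objects, which is why the <10% reflectivity requirement matters so much.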

4. darkconfidantislife - What are your opinions on flash LiDAR? Do you think they'll work well in the automotive space? Why or why not?

Jason_Eichenholz - A “flash” LiDAR system literally sends out a flash of laser light that illuminates the full field of view of the sensor. These systems use a large-format focal plane array (FPA) to capture all the returns across the field of view simultaneously. Thus, flash LiDARs require no moving parts, driving some companies to suggest using them in the automotive space, where increased mechanical reliability and longer mean time to failure are attractive features. However, since flash LiDAR exposes the entire FPA, this technique is sensitive to interference from the sun, other sensors, and nefarious actors seeking to “spoof” the vehicle. As a result, flash LiDAR has shortcomings for safety-critical applications in vehicles, but it may have niche applications that support sensing for low-level autonomy.

Because of the quench time of current Geiger-mode APD arrays, performance degrades quickly in the presence of solar background radiation. Care must be taken to balance the laser and solar power to make sure that the detectors aren’t all depleted by incoming solar photons before the laser energy returns from the surface. Overcoming solar noise usually requires more laser power and the use of neutral-density filters in the receive channel. As a result, range and the ability to see dark objects are often issues with flash-based architectures.
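A toy Poisson model makes the depletion problem concrete: a Geiger-mode pixel fires on the first photon it sees and then needs time to re-arm, so the chance it is already consumed by background light before the laser return arrives grows exponentially with the background rate. The rates below are made-up illustrations, not measured figures.

```python
import math

def p_blinded(background_rate_hz, gate_time_s):
    """Probability a Geiger-mode APD pixel fires on a solar background photon
    before the laser return arrives, assuming Poisson photon arrivals and no
    re-arming within the range gate (a deliberately simplified model)."""
    return 1.0 - math.exp(-background_rate_hz * gate_time_s)

# Example: 1 MHz of solar background counts over a 1.33 us gate (200 m round trip).
print(f"{p_blinded(1e6, 1.33e-6):.0%} of pixels fire on sunlight first")  # ~74%
```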

Also, while flash LiDAR-based systems sound great in theory from a mechanical standpoint, there are other, less attractive features of flash LiDAR systems. The primary barrier is the cost to produce the detector arrays that work in the non-visible spectrum. The FPAs sell in the tens of thousands of dollars due to expensive materials and poor yields. Other factors, such as limited array size, will also pose challenges in the automotive space, where 360-degree ranging around a vehicle is desired. For example, 360-degree coverage can only be accomplished by either: 1) installing 12-20 FPA-based systems around the vehicle, assuming reasonable angular resolution per detector, or 2) mechanically scanning the array. The first option is cost-prohibitive, while the second eliminates flash LiDAR’s theoretical mechanical-stability advantage over traditional spinning LiDARs.
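The 12-20 unit figure is essentially coverage arithmetic: divide 360 degrees by each unit's horizontal field of view, leaving a little overlap between neighbors. The FOV and overlap values below are illustrative assumptions:

```python
import math

def sensors_for_full_coverage(fov_deg, overlap_deg=5.0):
    """Number of fixed flash-LiDAR units needed for 360-degree coverage,
    given each unit's horizontal FOV and a small overlap between neighbors."""
    effective_fov = fov_deg - overlap_deg
    return math.ceil(360.0 / effective_fov)

print(sensors_for_full_coverage(30.0))  # 15 units at a 30-degree FOV
print(sensors_for_full_coverage(25.0))  # 18 units at a 25-degree FOV
```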

5. milrey76 - Jason, what kind of impact has being involved in professional associations had on your career? What are some of the benefits?

Jason_Eichenholz - I was involved with OSA as a student chapter president while an undergraduate at Rensselaer Polytechnic Institute; in fact, I helped to start that chapter! From there, I continued my involvement with the society in graduate school at CREOL at the University of Central Florida. This ultimately led to becoming the student representative on the OSA Membership and Education Services (MES) Council, giving me the opportunity to be the only student attending the society’s international leadership meetings.

All of those connections and relationships directly led to opportunities to participate in OSA technically. I am fortunate to have served as a technical group chair, a technical division chair, and even on OSA’s meeting committee. In 2015, I was honored to be elected a Fellow of OSA.

I am a firm believer in the value of social capital - paying it forward professionally. Because of this involvement, I have had the opportunity to hire former graduate students, and to mentor and interact professionally with people I have known for 20+ years.
 