Suddenly, Everyone Is Talking About Lidar. Here Are 10 Things You Need to Know

It’s on autonomous cars and trucks. It’s on smartphones and tablets. It’s on the tip of every tech investor’s tongue. You guessed it: we’re talking about lidar. 

Stoked by a recent flurry of high-value investments in makers of “light detection and ranging” sensors, interest in lidar shows no signs of cooling off. This is in large part because of the starring role that remote sensing technology—that is, sensors that can detect and identify objects from a distance—will play in the future of mobility.

Nearly all major developers of advanced driver assistance systems (ADAS), defined as Levels 2 and 3 by SAE International, and safe self-driving or autonomous vehicle technology (Levels 4 and 5), utilize lidar as a key part of their suite of sensors—and for good reason. Lidar is used to create a precise, three-dimensional representation of the vehicle’s surroundings, making it an essential component for safety. 

Recent advancements in lidar technology, including the ability to detect objects with low-reflectivity surfaces (i.e., dark colors) and at longer ranges, have opened up the possibility of safe autonomous driving at highway speeds. Yet although lidar is often talked about as a singular technology, it comes in many different formats, with a wide range of capabilities for a growing number of applications. There’s no one-size-fits-all solution.

With the lidar industry still in its infancy, developments are moving at pace, leading to plenty of bold statements, and just as many misconceptions. What we can say for certain is that lidar’s effectiveness in autonomous driving lies in its partnership with other sensors such as camera and radar. As we’ll outline, this “sensor fusion” generates a complete picture of a vehicle’s surroundings, not to mention a useful metaphor: When it comes to understanding the lidar market, it’s essential to see the complete picture. 

Here are 10 things you need to know about lidar.

1. Lidar is a hot commodity in automated driving, especially as unit prices drop

Arguably the most challenging sensor to develop and manufacture, lidar can be a key differentiator among automated driving technology competitors. 

As is common with emerging technologies, early automotive lidar units came at a considerable cost, clocking in at around $75,000 each a decade ago. Now, thanks to rapid advances in R&D and product design, improved technology, falling component prices, and new manufacturing solutions, you can find lidar on a sub-$1,000 smartphone. For the automotive industry, the race is on to produce automotive-grade lidar units (meaning durable enough to withstand extreme environmental conditions over extended periods of time) at similarly affordable prices.

Focusing on lidar unit price, however, is far too simplistic. “Price” is relative to the technical capability of the lidar unit, including whether it’s intended for advanced driver assistance systems (ADAS) sold as part of optional equipment for privately-owned vehicles, or for autonomous vehicles that will operate in ride-hailing or goods delivery service fleets. To understand what really separates one lidar from another, you have to go deeper.

2. You say ToF, I say FMCW… Lidar hardware comes in many forms

The sensor pod that sits atop most autonomous test vehicles contains at least one lidar unit, housing a complex laser set-up that fires millions of beams every second to detect the presence of objects, calculate their range, and create a “point cloud”: a 3D image built from the locations of the laser beams’ returns.
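To make the “point cloud” idea concrete, here is a minimal sketch in Python of how a single return (a measured range plus the beam’s known angles) becomes one 3D point, and how a sweep of such points forms the cloud. The function name and the sensor-centered coordinate frame are illustrative assumptions, not any particular vendor’s API.

```python
import math

def return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) into a 3D point.

    Assumes a sensor-centered frame: x forward, y left, z up.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A point cloud is simply the collection of such points from one sweep:
returns = [(12.4, 30.0, -2.0), (55.1, -10.0, 0.5)]  # (range m, azimuth°, elevation°)
cloud = [return_to_point(r, az, el) for r, az, el in returns]
```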

The two most common sensor types used or in development by automotive lidar makers are time of flight (ToF) and frequency-modulated continuous-wave (FMCW), and there are strengths and weaknesses to each.

ToF is the most common form of lidar in automated vehicle applications. It uses multiple beams, each sending out a pulse of laser energy, and measures the time it takes for each pulse to bounce directly back off any given object. The measured time and the known speed of light are then used to calculate the object’s physical distance from the sensor.
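In code, the ToF calculation is a one-liner. The sketch below (plain Python, with illustrative names) divides the round trip by two because the pulse travels to the object and back:

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def tof_range_m(round_trip_s):
    """Range from a single time-of-flight measurement: the laser pulse
    covers the sensor-to-object distance twice, out and back."""
    return SPEED_OF_LIGHT * round_trip_s / 2

# A pulse returning after roughly 667 nanoseconds hit something ~100 m away:
print(tof_range_m(667e-9))  # ≈ 99.98 m
```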

The more prevalent form of ToF is linear-mode ToF, which uses laser pulses to send out trillions of photons and measures the fraction of those photons that come back from that one flash. It typically needs to receive tens if not hundreds of photons from each pulse to be able to form its point cloud. Referred to by some as “legacy ToF,” linear-mode sensors are not as sensitive as the less common “Geiger-mode” ToF.

Named after the Geiger counter, which can measure a single radioactive particle at a time, Geiger-mode lidar uses detectors that can register a single photon, the smallest measurable unit of light. It does this by sending laser pulses at very high repetition rates and measuring the flashes that return, regardless of the number of photons. Geiger-mode can produce much more accurate measurements than linear-mode lidar, and at longer range.

Frequency-modulated continuous wave, or FMCW, does not send out pulses: it continuously streams light and changes the frequency of the beam over time. To measure range, FMCW uses a split-beam technique, sending part of the beam all the way to the target while the other part remains inside the sensor. When the emitted beam is reflected back, the FMCW receiver mixes the reflected beam with the internal beam being emitted at that moment, then measures the difference in frequency between the outgoing and incoming beams: the larger the difference, the greater the range to the object. One of the shortcomings of current FMCW lidar sensors in development is the challenging tradeoff between field of view, frame rate, and resolution. While FMCW sensors currently in development offer a limited field of view, generally of no more than 120°, ToF offers a variety of options, including full 360° applications.
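As a rough sketch of the arithmetic (illustrative Python, assuming an idealized linear chirp and made-up parameter values): the chirp’s slope is its bandwidth divided by its duration, the measured beat frequency divided by that slope gives the round-trip delay, and half the delay times the speed of light gives the range.

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def fmcw_range_m(beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range from the frequency difference ("beat") between the outgoing
    and returning beams of an idealized linear FMCW chirp."""
    slope_hz_per_s = chirp_bandwidth_hz / chirp_duration_s
    round_trip_s = beat_hz / slope_hz_per_s
    return SPEED_OF_LIGHT * round_trip_s / 2

# Illustrative numbers: a 1 GHz chirp over 10 µs and a 66.7 MHz beat -> ~100 m
print(fmcw_range_m(66.7e6, 1e9, 10e-6))
```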

Clearly, there’s no universal solution when it comes to lidar—cost, application, technical requirements, and vehicle design are among the factors which dictate which type of lidar is used.

3. The veracity of measuring velocity

While both types of ToF lidar can calculate the velocity of an object by inferring motion over multiple measurements, FMCW can detect an object’s “radial velocity” along the line-of-sight between the object and the lidar—as well as its range—in a single measurement. However, FMCW cannot measure the “lateral velocity” perpendicular to the line-of-sight in one measurement. In other words, a single FMCW measurement can detect the relative speed of an oncoming car, but not the speed of a car on a cross street up ahead. To assess the “full velocity” of an object requires the determination of both the radial and lateral components.
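The distinction is easy to see in a small sketch (illustrative Python, 2D for simplicity): the radial component is the projection of the object’s velocity onto the line of sight, and the lateral component is whatever is left over.

```python
import math

def split_velocity(position, velocity):
    """Split a 2D velocity into its radial component (along the line of
    sight from a sensor at the origin) and its lateral component
    (perpendicular to that line). A single FMCW measurement captures
    only the radial part."""
    px, py = position
    vx, vy = velocity
    r = math.hypot(px, py)
    ux, uy = px / r, py / r          # unit line-of-sight vector
    radial = vx * ux + vy * uy       # toward/away speed
    lateral = vy * ux - vx * uy      # sideways speed
    return radial, lateral

# A car 50 m straight ahead, crossing at 10 m/s: zero radial, all lateral.
print(split_velocity((50.0, 0.0), (0.0, 10.0)))  # (0.0, 10.0)
```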

FMCW’s ability to measure range and radial velocity simultaneously is similar to radar. Indeed, FMCW lidar is the laser equivalent of the FMCW radar sensors that can be found in vehicle-safety devices such as blind-spot detection, park assist, and adaptive cruise control.

The high resolution of Geiger-mode ToF lidar means that lidar data can be treated like camera imagery, but improved with precise, pixel-by-pixel depth estimates. This sensor data richness enables a more precise full velocity to be estimated using tried-and-true computer-vision techniques.

Radial velocity measurements can be useful in very specific situations, such as helping to distinguish between two pedestrians moving towards or away from the sensor in opposite directions, or on limited-access highways where lateral velocity is minimal. Obtaining full velocity, however, requires inferring motion from multiple measurements, since lateral motion is not measurable in a single FMCW measurement. The velocity of two pedestrians walking in opposite directions on a crosswalk ahead of the vehicle—perpendicular to the vehicle’s travel direction—cannot be measured directly.

That means that in real world operation, both ToF and FMCW lidar need multiple measurements to determine full velocity, although ToF can enable calculation of full velocity from one sensor that features a 360° field of view.
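A minimal sketch of that multi-measurement approach (illustrative Python): track an object’s position across two successive scans and difference the positions.

```python
def full_velocity(pos_t0, pos_t1, dt_s):
    """Estimate full 2D velocity from an object's tracked position in two
    successive scans; this is how lateral motion is recovered in practice."""
    return ((pos_t1[0] - pos_t0[0]) / dt_s,
            (pos_t1[1] - pos_t0[1]) / dt_s)

# A pedestrian tracked across two scans 0.1 s apart, moving sideways:
print(full_velocity((20.0, 1.0), (20.0, 1.14), 0.1))  # ≈ (0.0, 1.4) m/s
```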

4. Beaming back at you: The importance of reflectivity

Because lidar both emits light and also detects reflections of that light, the brightness of an object’s surface—its reflectivity—is a key factor for lidar performance and reliability. After all, the distance a beam can be transmitted counts for little without accurately receiving the reflected beam. 

Reflectivity is measured as a percentage of the laser energy—put simply, the number of photons—transmitted by the lidar that returns to the lidar receiver. The lower the reflectivity, the fewer photons that return, and the harder it is to accurately detect the object. The greater the lidar sensor’s ability to detect objects with low reflectivity, the more reliable its performance. Many manufacturers of lidar are trying to develop a product that can detect a surface with less than 3% reflectivity at more than 200 meters, a key benchmark in being able to support safe autonomous vehicle operation at higher speeds, but so far, there are no solutions on the market.

On a simplistic level, consider the difference between white and black cars; white paint is generally 80% reflective, but black paint can be less than 1% reflective. Sensitivity to the lowest number of photons—ideally in the single digits—would enable the detection of darker, less reflective objects. Very dark materials reflect very little energy, making them hard to detect, particularly at longer range. And of course, the greater the range at which these photons can be detected, the better. 
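Back-of-the-envelope arithmetic shows why those numbers matter. The sketch below (illustrative Python) uses a simplified link budget for a diffuse target, in which the returned photon count scales with reflectivity and falls off with the square of range; the aperture and efficiency values are placeholders, not real product figures.

```python
import math

def expected_return_photons(tx_photons, reflectivity, range_m,
                            aperture_m2=1e-4, efficiency=0.5):
    """Simplified link budget for a diffuse (Lambertian) surface: returns
    scale with reflectivity and fall off as 1/range². Aperture area and
    system efficiency are illustrative placeholder values."""
    return (tx_photons * reflectivity * efficiency
            * aperture_m2 / (math.pi * range_m ** 2))

PULSE = 1e12  # on the order of a trillion photons per pulse
for paint, rho in [("white, 80%", 0.80), ("black, 1%", 0.01)]:
    print(paint, round(expected_return_photons(PULSE, rho, 200.0), 1))
# At 200 m the white car returns ~318 photons, the black car only ~4:
# single-digit-photon sensitivity is what makes the dark car detectable.
```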

As well as low reflectivity, a surface angled so that the energy is not directed back to the sensor also makes an object harder to detect. Another factor is background “noise”: light from sources other than the lidar beam, such as the sun or bright artificial lights, which can swamp the reflected beam and make it difficult for the lidar to detect the true range.

What does this mean for lidar? Linear-mode ToF lidar requires many more photons to get above the noise also detected by the sensor. Low-reflectivity objects—like dark blue and black vehicles—often reflect too few photons to be seen, so these sensors cannot see dark vehicles beyond a few tens of meters. Geiger-mode ToF lidar overcomes this limit by providing a high probability of detection with reflections of as little as a single photon.
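The statistics behind that claim can be sketched with a Poisson model (illustrative Python; the detector efficiency is an assumed value): a detector that can fire on a single photon still has a useful probability of detection when only a handful of photons return, and rapid pulse repetition multiplies the chances.

```python
import math

def geiger_detection_probability(mean_return_photons, detector_efficiency=0.25):
    """Probability that a single-photon (Geiger-mode) detector fires at
    least once for one pulse, assuming Poisson photon arrivals."""
    return 1.0 - math.exp(-detector_efficiency * mean_return_photons)

for n in (1, 4, 20):
    print(n, round(geiger_detection_probability(n), 3))
# 1 -> 0.221, 4 -> 0.632, 20 -> 0.993: a few photons per pulse, repeated
# at high rates, is enough to build up a reliable detection.
```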

When it comes to resolution and reflectivity, it’s all about the photons—and the fewer the number of photons required for detection, the better.

5. You can’t just lidar away your problems. It’s all about “sensor fusion”

Despite significant advances in lidar technology, no one sensor type alone has the capability to guide an autonomous vehicle safely everywhere people want to travel. But when several sensors are used in partnership—typically lidar, radar, and camera—the advantages of each sensor complement the limitations of the others.

Thanks to what’s known as “sensor fusion,” which enables self-driving systems to sense speed, distance, depth, and object type in 360 degrees, the resultant information is greater than the sum of its parts.

Lidar’s strengths lie in producing accurate 3D images by calculating the distance to objects with the high spatial resolution inherent in sensing using light. Radar, by comparison, can accurately detect the radial velocity, as well as the range of a moving object, but lacks the resolution to accurately detect an object’s shape and exact position. On the other hand, unlike lidar, radar performs well even in fog.

Object classification is the major advantage of camera technology, with the added benefit of color recognition. However, cameras have a number of limitations. These include performance in the dark and in very bright light, and the transition between the two (such as when a vehicle emerges from a dark tunnel on a bright sunny day); their performance when confronted with reflections; their inability to detect objects of a particular color against the same colored background; and crucially, they cannot measure depth directly and instead must infer it, often with much poorer range accuracy than lidar or radar.

By combining the strengths of each, sensor fusion complements the others’ weaknesses and provides the data required for redundancy—the engineering term for “backup.”
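As a toy illustration of the idea (Python, with entirely made-up field values): a fused object record draws each attribute from the sensor best placed to supply it.

```python
from dataclasses import dataclass

@dataclass
class FusedObject:
    """Each field comes from the sensor best suited to provide it,
    yielding a record richer than any single sensor could produce."""
    position_3d: tuple       # lidar: precise range and shape
    radial_velocity: float   # radar: direct measurement, works in fog
    object_class: str        # camera: classification (and color)

obj = FusedObject(position_3d=(48.2, -3.1, 0.9),
                  radial_velocity=-12.5,  # closing at 12.5 m/s
                  object_class="cyclist")
```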

6. When it comes to wavelength, size matters

Any ray of light has a specific wavelength, and it’s this property of light which has become a topic of much heated discussion in lidar engineering circles. Each wavelength has a number of advantages and disadvantages that are highly dependent on the application.

In automotive applications, the two most common lidar wavelengths are 905 nanometers (nm), which sits in the near-infrared (NIR) wavelength range, and 1550 nm, in the short-wave infrared (SWIR) range. Other lidar wavelengths at the shorter end include 850 and 940 nm, while the longer end ranges between 1400 and 1550 nm. NIR wavelengths are used to detect objects in the vehicle’s immediate surroundings even as close as mere centimeters away, while detection of faraway objects, at distances of over 200 meters, is better achieved with a SWIR wavelength. 

The greater the lidar range, the more time a self-driving vehicle has to detect objects—and to confirm, predict, and respond to their behavior smoothly and safely. As a result, the ability to detect objects from long distances, notably above 250 meters, is essential for safe autonomous driving at highway speeds and has become a key battleground in lidar development.

However, wavelength alone is an inadequate stand-in for lidar evaluation. Not only is wavelength selection application-specific, but there are also additional factors to consider, namely eye safety, weather, and ease of manufacturing.

7. Safety first: Autonomous vehicle lidar must be “eye-safe”

To avoid potential harm to human eyes, all commercial lasers in the United States, including those used for lidar, must comply with laser emission standards published by the International Electrotechnical Commission and regulated by the FDA.

However, the debate about eye safety has been clouded by some reports that have created a perception of one wavelength being safer than another. In truth, any wavelength can be made safe for the human eye, because eye safety is about more than just wavelength. What’s known as maximum permissible exposure (MPE) depends on a careful balance of various factors, notably wavelength, pulse frequency, and optical energy—that is, the amount of energy used to fire the laser beam. Put simply, the higher the amount of energy used, the greater the lidar’s potential range of detection—but too much optical energy has the potential to cause eye damage. 

While the precise MPE for a given lidar depends on many design details, the amount of laser energy that can be used safely at longer wavelengths above 1400 nm is generally at least 100 times higher than the laser energy that is safe at 905 nm. Because longer wavelength lidar solutions can safely emit more laser energy than short wavelength lidar, they can achieve longer sensing distances while still being eye-safe.
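The range implication follows from the same inverse-square falloff discussed earlier: all else being equal (a real simplification, since atmospheric absorption and detector behavior also differ by wavelength), the detection-limited range grows with the square root of the permitted energy. A tiny illustrative sketch in Python:

```python
import math

def range_gain(energy_ratio):
    """Returned signal falls off as 1/range², so a sensor allowed
    `energy_ratio` times more eye-safe laser energy can reach roughly
    sqrt(energy_ratio) times farther, all else being equal."""
    return math.sqrt(energy_ratio)

print(range_gain(100))  # 10.0 -> roughly ten times the reach
```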

The bottom line: both the short (905 nm) and long wavelengths (above 1400 nm) currently used in automotive applications are safe for human eyes, but within the constraints of remaining eye-safe, long wavelengths provide better long-range performance.

8. Spinning around? Different methods of controlling the lidar field of view

Lidar makers use a number of techniques, known collectively as beam steering, to send out and direct the lasers. These fall broadly into two groups: scanning lidar and flash lidar.

Scanning lidar moves laser beams and detectors rapidly across the entire sensor field of view (FOV), and this can be done in two different ways: mechanical scanning and solid state scanning. 

Mechanical scanning lidar is a tried and tested technology and already in production. Some units point a single laser at a spinning mirror, while others use a single laser pointed at a mirror that can rotate in 2D, allowing it to scan an area of as much as 120 degrees both laterally and vertically. Another technique involves mounting the lasers on a rotating drum, like the lens on a lighthouse, enabling 360-degree FOV. Stacking more lidar units together can achieve higher angular resolution than from a single laser, but typically in a larger package. Unless manufacturers opt for a spinning lidar technique, the only way to achieve 360-degree awareness is to use multiple sensors, and because of the need for overlap to ensure accurate coverage, often four or more lidar units are required depending on the FOV. For example, a 120-degree FOV sensor would require four units placed around the vehicle, adding cost and complexity to the overall solution.
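The unit count follows from simple coverage arithmetic, sketched below (illustrative Python; the overlap figure is an assumption, not an industry standard):

```python
import math

def sensors_for_360(fov_deg, overlap_deg=30.0):
    """Fixed sensors needed for full 360° coverage when each adjacent
    pair must overlap by `overlap_deg` (an illustrative value)."""
    effective_deg = fov_deg - overlap_deg
    return math.ceil(360.0 / effective_deg)

print(sensors_for_360(120.0))  # 4 units, matching the example above
```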

Solid state scanning lidar—also known as “optical phased array”—uses a silicon chip rather than moving parts to change the laser beam’s direction. Removing mechanical parts is alluring because it can enable more compact lidar unit design, and has the potential to simplify product complexity and assembly, reduce cost, and enable manufacturers to concentrate on increasing image resolution. These strengths are balanced by the present difficulty of achieving any significant FOV for scanning with solid state technology, and this FOV limitation imposes much greater complexity for obtaining 360-degree awareness around the vehicle. As a result, solid state is well behind mechanical scanning lidar in its development for manufacturing and deployment, and is more potential than reality.

Flash lidar uses truly motion-free, non-scanning devices which bathe an area with laser light, like a floodlight. This approach eliminates all moving parts, and avoids the science and manufacturing challenges of optical phased arrays, making them great sensors for certain applications…but typically not autonomous vehicles. Flash lidar forces a difficult trade-off between achieving the fine angular resolution needed for long-range object detection, and the wide FOV that is also needed—limited by the resolution of the detectors available today. And as with solid state and some mechanical scanning lidar, multiple sensors must be added to a vehicle to achieve 360-degree sensor coverage.

Just as there’s a range of lidar types, there’s also a range of beam steering techniques; ultimately, the decision on which to use depends on a number of factors, ranging from technical requirements, vehicle design, and application, to commercial readiness at the time of launch.

9. Whatever the weather? Almost…

Lidar developers have clearly made spectacular advances in recent years, but even they have yet to find a way of controlling the weather. 

Lidar sensor performance is degraded by airborne water droplets, which interfere with beams and reduce the lidar’s effective range. What’s more, the optical window of the sensor needs to be kept constantly clean and clear of water droplets and snowflakes.

Adverse weather conditions affect all lidar technologies, and the extent of their impact depends on lidar wavelength. Researchers at the Military University of Technology in Warsaw, Poland, found that “optical radiation at 1550 nm propagates much better than at a 905 nm wavelength,” and that even in conditions up to 100% humidity, light transmits seven times more efficiently at longer wavelengths than at 905 nm. Granted, degradation in rain is slightly greater at 1550 nm than at 905 nm, but as the Warsaw researchers’ results show, overall transmission is still better at 1550 nm than at 905 nm by a factor of as much as five for distances relevant to automotive sensing. Finally, the authors explicitly set aside the fact that background noise from sunlight is two times lower at 1550 nm than at 905 nm, an added advantage for lidar designed around these longer wavelengths.

The key challenges for lidar are the absorption of light by water droplets, and the “scattering” of a beam passing through a droplet or snowflake, which makes it difficult to pinpoint the object that reflected the lidar.  

The different effects on a beam of light passing through a droplet of water will forever remain a fact of physics. However, self-driving technology developers can change the way they deal with beam performance in poor weather. The “computational imaging enthusiasts” at Stanford Computational Imaging Lab are among those racing to develop a technique that can reconstruct the scattered beams and overcome the optical scattering challenge. 

With the right mix of innovation, wavelength, and power, improved performance in precipitation could be within reach. For the foreseeable future, though, an autonomous vehicle will be unable to drive safely at full speed in dense fog or blizzard conditions. That limitation often applies to human drivers, too, although we are not always aware of it. Fortunately, the continued development of new technology—such as computational imaging—will eventually allow autonomous vehicles to operate even in harsh weather conditions.

10. Mass production is easier dreamt than done

To be blunt, making lidar units is hard; mass producing them is even harder. Automakers need lidar units that are automotive grade, and meet the exacting requirements of self-driving system developers. They also require a manufacturing and supply chain capable of supporting high-volume production.

From R&D to manufacturing, supply, and after-sales support, lidar manufacturers need to demonstrate that they can consistently produce both low-volume runs of lidar prototypes and high volumes of lidar units ready for commercial operation, for a growing number of markets globally.

A lidar unit includes optics, electronics, and microscopic mechanical components. With the technology in its infancy, and little accumulated manufacturing experience to draw on, very few manufacturers have the required combination of manufacturing skills or the ability to produce competitively at high volumes. Automakers and self-driving vehicle companies will look to those suppliers that can reliably produce cutting-edge technology at a reasonable cost.

A complete picture emerges

With plenty of runway in the lidar market, it’s essential to be able to see the complete picture without interference from background noise. Depending on what the self-driving vehicle needs to see, there’s certainly technology in development that claims to fit the bill, but it is uncertain when it will see the light of day. Lidar is not an off-the-shelf commodity, but an intricate, application-specific, purpose-designed system of high-end sensors and components.

It needs to work harmoniously with all the other equally complex equipment required for self-driving. Developments in lidar technology could produce significant advances in the capabilities of self-driving vehicles, with single-photon sensitivity and eye-safer wavelengths beyond 1400 nm enabling game changing performance. Until then, the arena for lidar innovation will remain wide open.
