At some point in the near future—how near depends on who you ask—autonomous vehicles (AVs) will become a common sight on the roads. Because they operate without a driver or other human input, AVs, also known as self-driving cars, rely on sensors and computers working together to read the road and the surrounding environment.

Most of the advanced driver aids on the road today use a combination of radar and sonar to warn of unseen threats and to help stop a vehicle before a collision occurs. Lidar can perform similar functions to radar and sonar, but it’s a next-generation system that may represent the best option for giving AVs the ability to “see.” As automakers and other companies move through testing and real-world drives, it has become clear that next-generation sensors offer intriguing functionality but are not the silver bullet that many first thought they’d be.

What Is Lidar?

Lidar is short for “light detection and ranging.” Lidar systems use pulsed lasers to build a three-dimensional model of an environment. Because it uses light, lidar can map the environment faster and more accurately than systems that rely on sound (sonar) or microwaves (radar). The technology was developed by NASA to track satellites and measure distances in space, and it was picked up by other industries in the mid-1990s, when the United States Geological Survey used lidar to track coastal vegetation growth.

Since then, the technology has progressed, and lidar systems have become smaller and even more accurate. This has made lidar an attractive option to add “eyes” to autonomous vehicles, as the vehicles need to quickly develop an image of the world around them to avoid hitting pedestrians, animals, obstacles, and other vehicles.


Lidar systems map their environments by sending laser pulses outward. When a pulse hits an object or obstacle, it reflects back to the lidar unit. The system receives the return and calculates the distance between itself and the object, based on the elapsed time between emitting the pulse and receiving the return beam. Lidar does this rapidly, with some systems emitting millions of pulses per second. As the beams return, the system builds a picture of the world around the vehicle and can use computer algorithms to piece together shapes for cars, people, and other obstacles.
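The underlying math is simple time-of-flight geometry: a pulse travels out and back at the speed of light, so halving the round trip gives the range, and the beam’s direction turns that range into a 3D point. The Python sketch below shows the idea; the pulse timing and beam angles are invented illustrative values, not readings from any real sensor.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(elapsed_seconds: float) -> float:
    """The pulse travels to the object and back, so halve the round trip."""
    return SPEED_OF_LIGHT * elapsed_seconds / 2.0

def pulse_to_point(elapsed_seconds: float, azimuth_deg: float, elevation_deg: float):
    """Convert one return into an (x, y, z) point relative to the sensor."""
    r = distance_from_round_trip(elapsed_seconds)
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A return received 200 nanoseconds after emission is about 30 meters away.
print(distance_from_round_trip(200e-9))                      # ~29.98
print(pulse_to_point(200e-9, azimuth_deg=15.0, elevation_deg=-2.0))
```

Repeat that conversion millions of times per second across a scanning field of view, and the individual points accumulate into the “point cloud” that the vehicle’s software interprets.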

How Is Lidar Used?

Radar has been the go-to sensor in the automotive world for years and is used in several forms of advanced driver assistance systems (ADAS). Blind-spot monitoring systems use radar to detect vehicles before a lane change, adaptive cruise control uses radar to maintain a consistent distance between two vehicles on the road, and automatic emergency braking systems use radar to stop a vehicle before it makes contact with an obstacle.
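To make the adaptive-cruise example concrete, the core loop is: measure the gap to the lead vehicle, compare it to a desired following distance, and nudge the speed accordingly. The sketch below is a deliberately simplified proportional controller; the gain and distances are invented, and production systems are far more sophisticated.

```python
def adjust_speed(current_speed_mps: float, gap_m: float,
                 desired_gap_m: float = 40.0, gain: float = 0.1) -> float:
    """Proportional control: slow down when the measured gap is too small,
    speed back up when the gap opens (never dropping below zero)."""
    error = gap_m - desired_gap_m  # positive means we're too far back
    return max(0.0, current_speed_mps + gain * error)

print(adjust_speed(27.0, gap_m=25.0))  # gap too small -> slows to 25.5 m/s
print(adjust_speed(27.0, gap_m=55.0))  # gap too large -> speeds up to 28.5 m/s
```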

Lidar promises to improve on those features with more accurate environment mapping and quicker updates, thanks to the rapid-fire nature of its pulses. And because lidar can scan a full 360 degrees around the vehicle, it should improve the accuracy and quality of safety alerts.

How Lidar Works with AVs

First, it’s important to note that truly autonomous, self-driving vehicles aren’t currently for sale to everyday consumers. Vehicles such as Teslas and Super Cruise-equipped Cadillacs offer the ability to ride hands-free for extended periods, but only in extremely limited circumstances, such as on highways and interstates.

When self-driving vehicles do eventually reach the roads on a large scale, the amount of data they will need and the speed at which it must be collected are staggering. To piece together a decision-making process anywhere near the level of complexity a human brain can manage, autonomous vehicles need an accurate, real-time picture of the world around them. This is especially true in urban environments, where human drivers encounter other people, animals, and a variety of vehicles in a short period of time.
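Some back-of-the-envelope arithmetic shows why the data volume is staggering. The figures below (points per second, bytes per point, sensor count) are illustrative assumptions, not the specs of any particular vehicle.

```python
points_per_second = 2_000_000  # one high-end lidar unit
bytes_per_point = 16           # x, y, z, intensity as 32-bit floats
sensors = 4                    # an AV typically carries several units

bytes_per_second = points_per_second * bytes_per_point * sensors
print(f"{bytes_per_second / 1e6:.0f} MB of raw point data per second")
# -> 128 MB per second, before cameras and radar are even counted
```

All of that has to be filtered, fused, and interpreted in real time, which makes the computing problem as hard as the sensing one.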

[Image: An Optimus Ride self-driving shuttle; the world seen through the lidar lens. Credit: The Washington Post//Getty Images]

What Are the Downsides?

Lidar is considered the standard by many companies working on autonomous vehicles, but the technology is not fully accepted by all automakers. Tesla and its CEO, Elon Musk, have been critical of lidar as the driver for AV awareness, because the technology only reconstructs the geometry of its surroundings rather than capturing a visual picture of what’s going on. Small obstacles in the road are a good example. Lidar is more than capable of identifying that there is something in the road that needs to be avoided, but it cannot tell exactly what it’s looking at. To lidar, a balloon floating in the center of the road looks exactly the same as a large rock, so there are times when a non-threat is treated with outsized importance and times when a real threat may not be recognized as such. In a vacuum this isn’t a tremendous problem, but in the real world it’s far from ideal to have a vehicle misunderstanding what it’s looking at.
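A toy sketch makes the limitation concrete: a classifier that only sees shape has no way to separate two objects with the same geometry. The point clouds and height threshold below are invented for illustration.

```python
def classify_by_geometry(points):
    """Label anything in the lane taller than 0.2 m as an obstacle."""
    zs = [z for _, _, z in points]
    return "obstacle" if max(zs) - min(zs) > 0.2 else "ignore"

# A balloon and a rock of similar size return nearly identical point clouds.
balloon = [(10.0, 0.0, 0.1), (10.0, 0.1, 0.5), (10.1, 0.0, 0.4)]
rock    = [(10.0, 0.0, 0.1), (10.0, 0.1, 0.5), (10.1, 0.0, 0.4)]

print(classify_by_geometry(balloon))  # "obstacle"
print(classify_by_geometry(rock))     # "obstacle" -- indistinguishable by shape
```

Telling the two apart requires information that raw range data doesn’t carry, such as color, texture, or learned context from cameras.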

Tesla argues, as do others, that a vision-based system built on cameras can achieve the same awareness a lidar system provides, with the added assurance that comes from pictures of the actual environment. Tesla’s systems use cameras and learn over time, which makes them better able to deal with unpredictable environments. That capability, combined with the fact that cameras are currently far less expensive than lidar, has led some to question the need for costly sensors.

The answer to the question of which sensors will be best for autonomous vehicles is more complicated than determining whether a vehicle can “see.” The tests conducted so far have mostly taken place in limited and somewhat controlled environments that don’t fully represent the conditions an AV might encounter on a daily basis.