
What a Self-Driving Car “Sees” on a Public Road

The human eye is a marvel—capable of detecting light from objects as far as 2.6 million light-years away (hello, Andromeda Galaxy!). But even this evolutionarily advanced organ can’t compete with what the sensors on a self-driving car can “see.” Our eyes can’t see in 360 degrees. They can’t keep track of hundreds of objects at once. And they can’t calculate an object’s velocity and trajectory with mathematical precision.

For that, you need what an autonomous vehicle has—namely 30 individual sensors taking in gigabytes of information about the world around it, every second. All of this data is fused together by the self-driving system (SDS) to create a three-dimensional model of the world, allowing the vehicle’s “brain” to perceive what’s going on and decide how to act next. Simple, right?
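To give a rough sense of what “fusing” means, here is a minimal Python sketch of how labeled camera detections and lidar range measurements might be combined into 3D object estimates. The class names, fields, and bearing-matching logic are hypothetical simplifications for illustration, not Argo’s actual software.

```python
import math
from dataclasses import dataclass

# Hypothetical, simplified types -- a real AV stack uses far richer data.
@dataclass
class CameraDetection:
    label: str           # object class from the camera, e.g. "pedestrian"
    bearing_deg: float   # direction to the object, relative to the car's heading

@dataclass
class LidarPoint:
    x: float  # meters forward of the car
    y: float  # meters to the left of the car

@dataclass
class TrackedObject:
    label: str
    x: float
    y: float
    range_m: float

def fuse(detections, points, tolerance_deg=2.0):
    """Attach 3D lidar evidence to 2D camera labels by matching bearings."""
    objects = []
    for det in detections:
        # Keep lidar returns that lie in roughly the same direction
        # the camera saw the object.
        nearby = [p for p in points
                  if abs(math.degrees(math.atan2(p.y, p.x)) - det.bearing_deg)
                  < tolerance_deg]
        if not nearby:
            continue
        cx = sum(p.x for p in nearby) / len(nearby)
        cy = sum(p.y for p in nearby) / len(nearby)
        objects.append(TrackedObject(det.label, cx, cy, math.hypot(cx, cy)))
    return objects

detections = [CameraDetection("pedestrian", bearing_deg=10.0)]
points = [LidarPoint(9.8, 1.7), LidarPoint(10.1, 1.8)]
print(fuse(detections, points))  # one pedestrian ~10 m ahead, slightly left
```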

Not at all. In its raw form, the information streaming in from the vehicle’s sensors is incredibly complex: 2D images, coordinates in space, and lots and lots of numbers. While a self-driving system can process an immense amount of sensor data at once, the software engineers working on the system can’t make sense of it all in that form. So they build visualization tools that turn the raw data streams into images a human can interpret—like the ones below.

The images below represent a single moment in time—an Argo test vehicle making a left-hand turn at an intersection in Midtown Miami—captured by a plethora of on-board sensors.

Cameras and Classification 

The test vehicle is crowned by seven cameras, three of which capture the scene unfolding directly in front of the car (as seen via the front camera), while the other four cover anything that might approach the vehicle from the left, right, or behind. Here, after stopping at a busy four-way intersection, the vehicle determines it has the right-of-way and safely enters the intersection. It stops to allow two pedestrians to cross the street. Each “actor” in this scene—including the FedEx truck creeping into the intersection; parked and moving vehicles; a bicycle parked against a street pole; and all nearby pedestrians—is detected by the SDS and classified. The color-coded “masks” match each pixel of the image to an object classification: orange masks indicate people, blue masks mean vehicles, and pink masks are for bicycles.
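A segmentation mask like this is easy to picture in code: a grid of per-pixel class IDs mapped to display colors. The sketch below assumes a hypothetical palette matching the article’s color scheme; it is an illustration of the idea, not Argo’s implementation.

```python
import numpy as np

# Hypothetical class IDs and the article's color scheme:
# orange for people, blue for vehicles, pink for bicycles.
PALETTE = {
    0: (0, 0, 0),        # background: black
    1: (255, 140, 0),    # person: orange
    2: (0, 90, 255),     # vehicle: blue
    3: (255, 105, 180),  # bicycle: pink
}

def colorize(mask: np.ndarray) -> np.ndarray:
    """Turn an H x W array of per-pixel class IDs into an H x W x 3 RGB image."""
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for class_id, color in PALETTE.items():
        rgb[mask == class_id] = color
    return rgb

# A toy 2 x 3 "image": one person pixel, two vehicle pixels.
mask = np.array([[0, 1, 0],
                 [2, 2, 0]])
print(colorize(mask)[0, 1])  # -> [255 140   0], the person color
```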

An Argo AI autonomous vehicle’s cameras capture objects on a street in Miami.

Lidar Goes to Work

The same scene is captured by the vehicle’s lidar sensors. The visual waves represent pulses of laser light beamed from sensors housed in the vehicle’s rooftop sensor pod. Whenever the beams hit an object, they bounce back to the vehicle’s sensors as unique points of measurement. Cumulatively, these hundreds of thousands of points generate a “point cloud,” a visual representation of the surface area of all the objects detected. The point cloud can be visualized in a multitude of ways, in this case as color fields that indicate the distance of the object from the vehicle. Each lidar sensor collects 10 of these images every second, ensuring that if anything moves suddenly, the vehicle will notice it.
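Coloring a point cloud by distance is conceptually simple, as the sketch below shows. The random points and the matplotlib rendering are stand-ins for real lidar returns and Argo’s visualization tools; at 10 frames per second, a fresh cloud like this would arrive every 100 milliseconds.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical point cloud: N x 3 array of (x, y, z) returns in meters.
rng = np.random.default_rng(0)
points = rng.uniform(-40, 40, size=(100_000, 3))

# Distance of each return from the sensor -- this is what the
# color field in the visualization encodes.
distances = np.linalg.norm(points, axis=1)

# A bird's-eye scatter of the cloud, colored near-to-far.
plt.scatter(points[:, 0], points[:, 1], c=distances, s=0.1, cmap="viridis")
plt.colorbar(label="distance from vehicle (m)")
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.show()
```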

MORE: 10 Critical Things to Know About Lidar

Whenever the pulses of laser light from Argo’s lidar hit an object, they bounce back to create a visual representation of all the objects detected.

Overlaying the 3D Map

It’s not enough to just “see” what’s happening around the vehicle—the SDS must also understand the parameters of the area where it’s operating. For this, it turns to a 3D model of the area: a meticulously constructed map of the city that features everything a driver (or, in this case, the Argo SDS) should know about the street, its infrastructure, and the laws that regulate it. Here you will find lane markers that indicate all of the legal pathways for driving, street signs that dictate speed limits and rules of the road, and precisely rendered traffic lights that communicate which lanes they control and the yielding relationships of all road users.
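One way to imagine such a map is as structured data keyed to lanes. The schema below is purely illustrative (Argo’s actual map format is not described in this article), but it captures the kinds of relationships mentioned: lane pathways, speed limits, signal control, and yielding rules.

```python
from dataclasses import dataclass, field

# Illustrative schema only -- not Argo's real map format.
@dataclass
class Lane:
    lane_id: str
    centerline: list          # ordered (x, y) waypoints in map coordinates
    speed_limit_mps: float    # from the posted speed limit
    successor_ids: list = field(default_factory=list)  # legal next lanes

@dataclass
class TrafficLight:
    light_id: str
    controlled_lane_ids: list  # which lanes this signal governs

@dataclass
class YieldRule:
    lane_id: str
    must_yield_to: list        # lane IDs with right-of-way over this lane

# A left-turn lane governed by one signal, yielding to oncoming traffic.
left_turn = Lane("lane_42", [(0, 0), (5, 3), (8, 8)], speed_limit_mps=11.2)
light = TrafficLight("signal_7", controlled_lane_ids=["lane_42"])
rule = YieldRule("lane_42", must_yield_to=["lane_17"])
```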

A 3D model of a Miami intersection constructed for Argo’s self-driving system.

Predicting the Future  

Once the vehicle has perceived its surroundings, it must make predictions about the intentions of all actors on the road. Here, the SDS highlights potential pathways for each of the nearby vehicles, allowing the Argo vehicle to anticipate multiple possible actions so that it’s not taken by surprise. The blue line is the route that the Argo vehicle intends to take. The multicolored lines indicate likely routes for other vehicles in the scene.
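A toy version of this idea can be written in a few lines: roll each observed actor forward along several candidate paths. The constant-speed, fixed-turn-rate hypotheses below are a deliberately crude stand-in for the learned, map-aware prediction a real self-driving system uses.

```python
import math

def predict_paths(x, y, speed, heading_deg, horizon_s=3.0, step_s=0.5):
    """Roll out several candidate trajectories for one road user.

    A toy stand-in for learned prediction: one hypothesis goes straight,
    one curves left, one curves right. Real systems weight many more
    possibilities using map context and observed behavior.
    """
    hypotheses = {}
    for name, turn_rate in [("straight", 0.0), ("left", 15.0), ("right", -15.0)]:
        px, py, h = x, y, heading_deg
        path = []
        t = 0.0
        while t < horizon_s:
            h += turn_rate * step_s                       # degrees per second
            px += speed * step_s * math.cos(math.radians(h))
            py += speed * step_s * math.sin(math.radians(h))
            path.append((round(px, 1), round(py, 1)))
            t += step_s
        hypotheses[name] = path
    return hypotheses

# A vehicle 20 m ahead of us, traveling at 5 m/s in the same direction.
for name, path in predict_paths(20.0, 0.0, 5.0, 0.0).items():
    print(name, path[:3])
```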

MORE: How Self-Driving Vehicles Learn to Co-Exist With City Traffic

Argo’s self-driving system maps out potential routes for each nearby road user, enabling the autonomous vehicle to anticipate possible outcomes.

The World on High  

It may look like a satellite-eye view of the city below, but this visual is actually part of a 3D model of the city created from the ground up (and definitely not from outer space). Argo engineers use it to visualize all of the physical and legal features of the vehicle’s surroundings—from no-turn lanes to bus stops to view-blocking foliage—and the map can be “zoomed out” to see the vehicle in context: just one small participant in a rich and complex ecosystem.

A zoomed-out 3D map of Miami, used by Argo engineers to visualize the physical and legal features of the vehicle’s surroundings.

All in a Blink

As these examples demonstrate, there’s no such thing as a routine left-hand turn for a self-driving vehicle. In the level of detail they capture and the volume of data they convey, these visualizations underscore the vital role computer vision plays in safe, predictable driving. And it all happens in one-tenth of a second—the literal blink of an eye.
