
The Meaning of Mapping: Why a Safe and Scalable Self-Driving System Depends on a Hyper-Detailed City Map

At the heart of every autonomous vehicle powered by the Argo self-driving system is a map of the city where we’re testing. But this is no ordinary map. These highly accurate, richly detailed 3D models don’t just enable our vehicles to navigate city streets autonomously; they help raise the level of safety in the communities where we operate. They also deliver a crucial advantage when the time comes to expand within those metro areas or deploy in new cities.

Our maps capture the entire spectrum of static streetscape objects and geo-specific traffic control rules and, where necessary, geo-specific driver behavior. This equips the self-driving system with “prior knowledge” of a city — similar to a human driver’s knowledge after driving in a city for a long time, but with an even greater degree of precision. We create all of our own maps by first manually driving our test vehicles to capture the sensor data. Once the street models are built, we annotate the environment using semi-automated tooling that has enhanced our ability to scale to other areas much more quickly, while maintaining high quality and accuracy to ensure safety. 

A complex Miami intersection shown with ground surface imagery.

The maps don’t just demarcate the intersections, avenues, and side streets that make up a dense urban environment. They also include traffic signal locations and semantics (in 3D space); street signs (even those that have been vandalized, occluded, or broken in ways that might confuse the naked eye); local laws, regulations, and social norms such as speed limits; and street features like crosswalks, lane geometries, bike lanes, and curbs.

Mapping = Memory

The map’s wealth of contextual knowledge forms part of the “memory” of the self-driving system. We create our own maps because this memory is tightly coupled with our software and hardware, and forms the basis of our ability to operate in autonomous mode. It allows our self-driving vehicle to compare what it’s observing in real-time with this detailed 3D model of the physical space it is anticipating, and identify any differences between the two to determine how to react. 
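Argo hasn’t published the internals of this comparison, but the core idea — match what the sensors see against the prior map, dismiss the matches, and focus attention on whatever is left over — can be sketched in a few lines. Everything here (the `MapObject` type, the `unexpected_objects` helper, the half-meter tolerance) is a hypothetical illustration, not Argo’s actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MapObject:
    kind: str   # e.g. "traffic_light", "stop_sign", "pedestrian"
    x: float    # map-frame position, meters
    y: float

def unexpected_objects(observed, prior_map, tolerance=0.5):
    """Return observed objects with no counterpart in the prior map.

    An observation "matches" a mapped object if it has the same kind and
    lies within `tolerance` meters of it. Whatever fails to match is the
    unexpected part of the scene — typically moving objects — that the
    self-driving system should pay attention to.
    """
    surprises = []
    for obs in observed:
        matched = any(
            obs.kind == m.kind
            and (obs.x - m.x) ** 2 + (obs.y - m.y) ** 2 <= tolerance ** 2
            for m in prior_map
        )
        if not matched:
            surprises.append(obs)
    return surprises

# The mapped traffic light is re-observed (within tolerance) and dismissed;
# the pedestrian has no map counterpart, so it demands attention.
prior = [MapObject("traffic_light", 10.0, 4.0), MapObject("stop_sign", 2.0, 1.0)]
seen = [MapObject("traffic_light", 10.2, 4.1), MapObject("pedestrian", 5.0, 2.0)]
print([o.kind for o in unexpected_objects(seen, prior)])  # → ['pedestrian']
```

The design choice the sketch captures is the one the article describes: the prior map turns most of a busy streetscape into known background, so perception capacity is spent on the genuinely dynamic remainder.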

That goes for even the thorniest of urban intersections. Two years ago, our team discovered a particularly tricky junction in the heart of Washington, DC. We nicknamed the intersection “starfish,” because its four curving lanes, multiple crosswalks, and constellation of road signals keep traffic in constant motion. Most human drivers struggle to navigate the starfish, overwhelmed by deciding which traffic lights to focus on, which lanes to merge into, and how to avoid the pedestrians, cyclists, and other vulnerable road users (VRUs) in their path. Using the map and our testing experience, the Argo self-driving system can now approach the starfish with prior knowledge of its idiosyncrasies, while also safely responding to dynamic variables — a pedestrian crossing outside a crosswalk, a bus merging ahead, a commuting cyclist sharing the lane — and determining whether and how to react to each.

In this way, the “memory” helps our vehicle dismiss objects and structures that it already knows about, and focus its attention on the unexpected, especially moving objects, in busy urban areas. 

A vector map of the “starfish” intersection in Washington, DC shows the locations of traffic lights and road signs, and includes the direction of each lane of traffic.

Mapping City-by-City

In addition to equipping our vehicles with a comprehensive 3D model of the physical spaces they encounter, our mapping system’s other primary role is to help our vehicles “localize.” Compared to GPS, Argo’s localization is far more accurate in urban areas, pinpointing where our vehicles are in the world down to the centimeter. It also provides an understanding of location-specific road rules and regulations so the vehicles can adjust their behavior accordingly.

That means our maps include specific, localized information for each of the six cities where we operate. This allows our self-driving system to drive more naturalistically, because it understands geo-specific rules (like how default speeds vary city by city on streets without posted speed limit signs) and cues (like how pedestrian crossing behaviors differ between Miami and Washington, DC).
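One way to picture geo-specific rules is as a per-city lookup the planner consults when the map carries no posted sign. The sketch below is purely illustrative — the city keys and speed values are invented placeholders, not Argo’s actual data or API:

```python
from typing import Optional

# Hypothetical per-city defaults. These numbers are illustrative
# placeholders, not Argo's actual values.
CITY_RULES = {
    "miami":         {"default_speed_mph": 30},
    "washington_dc": {"default_speed_mph": 20},
}

def speed_limit_mph(city: str, posted_limit: Optional[int]) -> int:
    """Use the posted limit when the map records a sign; otherwise fall
    back to the city's statutory default for unposted streets."""
    if posted_limit is not None:
        return posted_limit
    return CITY_RULES[city]["default_speed_mph"]

print(speed_limit_mph("miami", None))        # → 30 (city default applies)
print(speed_limit_mph("washington_dc", 25))  # → 25 (posted sign wins)
```

Baking such defaults into the map, rather than hard-coding one nationwide value, is what lets the same driving software behave naturally in cities with different conventions.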

Grand Circus in Downtown Detroit, shown with ground surface imagery (left) and 3D lidar point clouds (right), depicting the exact locations of objects in space.

This information, paired with hyper-specific 3D models, ensures that our vehicles are as prepared as possible to navigate safely in multiple cities, in even the most unexpected circumstances. 

Mapping at Scale

Every time our team enters a new city to test our self-driving system, we begin constructing our map with a small area around our facilities, then expand rapidly over time. By developing tools that semi-automate the annotation of traffic signs, lane marking types, and lane geometry, we reduce the time, effort, and upkeep needed to push deep into a new city. In fact, we have hundreds of undirected miles of roads mapped in the heart of each city where we operate.

Looking to the future, this process will only continue to become more efficient and accurate. With our fleets now operating constantly in our test cities, we’re able to exercise “opportunistic mapping”—the constant collection of road data by our self-driving cars on the go—to maintain our maps without the need for dedicated mapping vehicles or location scouting. 

Maps are central to our success in operating our test vehicles safely on the roads. As we continue to expand our testing program in new cities around the U.S. and soon in Europe, our maps will continue to ensure that we respond appropriately and safely in every environment and situation—from an intersection with a broken traffic signal, to the trickiest starfish junction. And as the automated tools and processes we develop continue to improve, we’ll continue to reduce the time it takes to scale and expand into new regions around the globe.
