The Meaning of Mapping: Why a Safe and Scalable Self-Driving System Depends on a Hyper-Detailed City Map
At the heart of every autonomous vehicle powered by the Argo self-driving system is a map of the city where we're testing. But this is no ordinary map. These highly accurate, richly detailed 3D models do more than enable our vehicles to navigate city streets autonomously: they help raise the level of safety in the communities where we operate, and they deliver a crucial advantage when the time comes to expand within those metro areas or deploy in new cities.
Our maps capture the entire spectrum of static streetscape objects and geo-specific traffic control rules and, where necessary, geo-specific driver behavior. This equips the self-driving system with “prior knowledge” of a city — similar to a human driver’s knowledge after driving in a city for a long time, but with an even greater degree of precision. We create all of our own maps by first manually driving our test vehicles to capture the sensor data. Once the street models are built, we annotate the environment using semi-automated tooling that has enhanced our ability to scale to other areas much more quickly, while maintaining high quality and accuracy to ensure safety.
The maps don’t just demarcate the intersections, avenues, and side streets that make up a dense urban environment. They also include traffic signal locations and semantics (in 3D space); street signs (even those that have been vandalized, occluded, or broken such that they might be confusing to the naked eye); and the local laws, regulations, and social norms the streetscape encodes, including speed limits, crosswalks, lane geometries, bike lanes, curbs, and so on.
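To make the idea concrete, here is a minimal sketch of the kinds of records such a map layer might contain. The class names, fields, and values are invented for illustration and are not Argo's actual map schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical records for an HD map layer: signals and lanes annotated
# in 3D, with links between them. All names and values are illustrative.

@dataclass
class TrafficSignal:
    signal_id: str
    position: tuple                 # (x, y, z) in the map frame, metres
    controls_lanes: list            # IDs of the lanes this signal governs

@dataclass
class LaneSegment:
    lane_id: str
    centerline: list                # ordered (x, y, z) points
    lane_type: str                  # e.g. "driving", "bike"
    speed_limit_mps: Optional[float]  # None => fall back to a city default

@dataclass
class MapTile:
    tile_id: str
    signals: list = field(default_factory=list)
    lanes: list = field(default_factory=list)

# Build a tiny tile: one signal governing one lane.
tile = MapTile("dc_0413")
tile.lanes.append(LaneSegment("lane_2", [(0, 0, 0), (10, 0, 0)], "driving", 11.2))
tile.signals.append(TrafficSignal("sig_7", (12.3, -4.1, 5.2), ["lane_2"]))
```

Linking a signal to the specific lanes it controls is what lets a planner know which of several lights at a complex junction applies to its current lane.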
Mapping = Memory
The map’s wealth of contextual knowledge forms part of the “memory” of the self-driving system. We create our own maps because this memory is tightly coupled with our software and hardware, and forms the basis of our ability to operate in autonomous mode. It allows our self-driving vehicle to compare what it’s observing in real-time with this detailed 3D model of the physical space it is anticipating, and identify any differences between the two to determine how to react.
That goes for even the thorniest of urban intersections. Two years ago, our team discovered a particularly tricky junction in the heart of Washington, DC. We nicknamed the intersection “the starfish” because its four curving lanes, multiple crosswalks, and constellation of traffic signals keep everything in constant motion. Most human drivers struggle to navigate the starfish, overwhelmed by choosing which traffic lights to focus on, which lanes to merge into, and how to avoid the pedestrians, cyclists, and other vulnerable road users (VRUs) in their path. Using the map and our testing experience, the Argo self-driving system can now approach the starfish with prior knowledge of its idiosyncrasies while also safely responding to dynamic variables, such as a pedestrian crossing outside a crosswalk, a bus merging ahead, or a commuting cyclist sharing the lane, and determine whether and how to react to each.
In this way, the “memory” helps our vehicle dismiss objects and structures that it already knows about, and focus its attention on the unexpected, especially moving objects, in busy urban areas.
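A toy sketch of that "attend to the unexpected" idea: live detections that line up with a mapped static object within some tolerance can be deprioritized, while anything unmatched is flagged for closer attention. The positions, threshold, and matching rule here are invented for illustration, not Argo's actual perception pipeline.

```python
import math

# Mapped static objects (e.g. sign and pole positions) as (x, y) in metres.
# Values are made up for the example.
KNOWN_STATIC = [(10.0, 2.0), (25.0, -3.0)]
MATCH_RADIUS_M = 0.5  # illustrative tolerance for matching a detection to the map

def unexpected(detections):
    """Return detections that don't match any mapped static object."""
    flagged = []
    for det in detections:
        if not any(math.dist(det, known) <= MATCH_RADIUS_M for known in KNOWN_STATIC):
            flagged.append(det)
    return flagged

# One detection matches a mapped sign; the other is something new in the scene.
live = [(10.1, 2.05), (18.0, 0.0)]
print(unexpected(live))  # [(18.0, 0.0)]
```

Real systems match far richer features than 2D points, but the principle is the same: the prior map filters out the expected so computation can focus on the novel.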
In addition to equipping our vehicles with a comprehensive 3D model of the physical spaces they encounter, the other primary utility of our mapping system is to help our vehicles “localize.” Compared to GPS, Argo’s localization is far more accurate in urban areas, calculating where our vehicles are in the world down to the centimeter. It also provides an understanding of location-specific road rules and regulations so the vehicles can adjust their behavior accordingly.
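One simplified way to see why a map tightens up a coarse GPS fix: if the sensors observe a landmark whose surveyed position is already in the map, the vehicle's own position follows directly. This sketch assumes a known heading and ignores orientation; all coordinates are invented, and production localization (typically lidar scan matching against the map) is far more involved.

```python
# Surveyed landmark position in the map frame (from the HD map), metres.
LANDMARK_MAP_XY = (100.00, 50.00)

# Coarse GPS fix: metre-level error is common in urban canyons.
gps_estimate = (99.2, 51.1)

# Sensor measurement: landmark offset relative to the vehicle, metres.
# (Assumes heading is known, so the offset is already in the map frame.)
observed_offset = (4.00, -2.00)

# Vehicle position = landmark position minus the observed offset.
refined = (LANDMARK_MAP_XY[0] - observed_offset[0],
           LANDMARK_MAP_XY[1] - observed_offset[1])
print(refined)  # (96.0, 52.0), pinned to the precision of map + sensor
```

Because the landmark was surveyed to centimeter accuracy when the map was built, the refined fix inherits that precision rather than the GPS error.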
That means our maps include specific, localized information for the six unique cities where we operate. This allows our self-driving system to drive more naturalistically, because it understands geo-specific rules (like how default speeds vary city by city for streets without posted speed limit signs) and cues (like how pedestrian crossing behaviors differ between Miami and Washington, DC).
This information, paired with hyper-specific 3D models, ensures that our vehicles are as prepared as possible to navigate safely in multiple cities, in even the most unexpected circumstances.
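The city-by-city fallback described above can be pictured as a simple lookup: a posted sign wins, otherwise the city's default applies. The table values below are invented for illustration, not the actual regulations in these cities.

```python
# Hypothetical per-city default speed limits (mph) for unposted streets.
# Values are illustrative only.
CITY_DEFAULT_SPEED_MPH = {
    "miami": 30,
    "washington_dc": 20,
    "pittsburgh": 25,
}

def effective_speed_limit(city, posted_mph=None, fallback_mph=25):
    """A posted sign always wins; otherwise use the city default."""
    if posted_mph is not None:
        return posted_mph
    return CITY_DEFAULT_SPEED_MPH.get(city, fallback_mph)

print(effective_speed_limit("washington_dc"))      # 20: unposted street, city default
print(effective_speed_limit("washington_dc", 35))  # 35: posted sign overrides
```

Encoding the rule in the map, rather than in driving code, is what lets the same software behave correctly as the fleet moves between cities.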
Mapping at Scale
Every time our team enters a new city to test our self-driving system, we begin constructing our map with a small area around our facilities, expanding rapidly over time. By developing tools that semi-automate the annotation of traffic signs, lane marking types, and lane geometry, we can shrink the time and effort needed to expand deep into a new city, and reduce the upkeep required to keep those maps current. In fact, we have hundreds of undirected miles of roads mapped in the heart of each city where we operate.
Looking to the future, this process will only continue to become more efficient and accurate. With our fleets now operating constantly in our test cities, we’re able to exercise “opportunistic mapping”—the constant collection of road data by our self-driving cars on the go—to maintain our maps without the need for dedicated mapping vehicles or location scouting.
Maps are central to our success in operating our test vehicles safely on the roads. As we continue to expand our testing program into new cities across the U.S., and soon in Europe, our maps will continue to ensure that we respond appropriately and safely in every environment and situation, from an intersection with a broken traffic signal to the trickiest starfish junction. And as the automated tools and processes we develop continue to improve, we’ll continue to reduce the time it takes to scale and expand into new regions around the globe.