How Autonomous Vehicles Identify Street Signs — Even Graffitied, Damaged, and Faded Ones

Stop, yield, no U-turn, no turn on red, railroad crossing, children playing, falling rocks — there are a huge variety of road signs out there that drivers are expected to recognize to get around safely.

We human beings typically learn about them in school, in driver’s ed courses, and while going about our daily lives. But signs can be changed without warning, and unfamiliar new ones can be installed at any time.

Given all that, how do autonomous vehicles (AVs) learn to recognize, understand, and follow road signs correctly?

Even more challenging: How do AVs interpret street signs that are defaced, damaged, or altered — intentionally by street artists and malicious actors, or unintentionally by weather and accidents?

Engineers at Argo AI, a leading autonomy products and services company, explain how the Argo Autonomy Platform, the collection of hardware and software that drives the company’s AVs, handles these real-world challenges regardless of the types of signs it comes across, and what condition they are in.

3D Maps Contain Street Sign Information and Much More

In every city where Argo tests its technology, the company uses its fleets of AVs to gather data about the roadway environment and turn it into high-resolution 3D maps of the AV driving area (also known as a geonet).

This data is gathered by the specialized suite of sensors mounted on the exterior of each car, including cameras, lidar, radar, and more.

As a result, Argo’s maps contain far more detail about the roads than is found in a typical consumer smartphone map app.

That includes lane geometry (the precise measurements of the roads and their shapes), lane markings and the number of lanes in any given roadway segment, directions of travel, locations of traffic signals, bike lanes, street signs and their meanings, and other roadway infrastructure such as barriers and even some nearby vegetation.

Argo runs AV mapping missions frequently, and each one records the changes that have occurred in the roadway environment since the last mission — construction, road closures, signage changed or installed, and many other roadway infrastructure details.

The changes are noted in Argo’s maps by computer programs and verified by human analysts. Then, updated maps with the verified changes are installed on every single Argo AV before it hits the road for any of its possible services: autonomous test drives, ride-hailing, or goods deliveries.
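To make the idea concrete, here is a tiny illustrative sketch, in Python, of how sign annotations from two mapping missions could be compared to surface changes for human review. The data shapes are purely hypothetical; Argo has not published its map format.

```python
# Hypothetical sketch: diff the sign annotations seen on two mapping missions
# so that additions and removals can be flagged for analyst verification.
previous = {("STOP", 12.5, 3.0), ("YIELD", 40.2, -7.1)}       # (type, x, y) from last mission
latest   = {("STOP", 12.5, 3.0), ("NO_ENTRY", 40.2, -7.1)}    # from the newest mission

added   = latest - previous    # signs that appeared since the last mission
removed = previous - latest    # signs that are no longer observed

print("flag for human verification:", added | removed)
```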

The new map data is made accessible to each AV “offline,” even without an internet connection. That way, even if an Argo AV drives into an area with poor wireless connectivity, such as a tunnel or between tall buildings, it can still reference the latest updated maps instantly.
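Here is a minimal, hypothetical sketch of what an offline map tile with verified sign annotations might look like, and how a vehicle could query it with no network connection. All class and field names are invented for illustration and are not Argo’s actual schema.

```python
# A minimal sketch (not Argo's schema) of sign annotations stored in an
# offline map tile, queried entirely from local storage with no network access.
from dataclasses import dataclass
from math import hypot
from typing import List, Optional

@dataclass
class MappedSign:
    sign_type: str        # e.g. "STOP", "YIELD", "NO_ENTRY"
    x: float              # position in the tile's local frame (meters)
    y: float
    heading_deg: float    # direction the sign face points
    verified: bool        # True once a human analyst has confirmed it

@dataclass
class MapTile:
    tile_id: str
    signs: List[MappedSign]

    def nearest_sign(self, x: float, y: float,
                     max_range_m: float = 30.0) -> Optional[MappedSign]:
        """Return the closest verified sign within range, or None."""
        candidates = [s for s in self.signs if s.verified]
        if not candidates:
            return None
        best = min(candidates, key=lambda s: hypot(s.x - x, s.y - y))
        return best if hypot(best.x - x, best.y - y) <= max_range_m else None

# The tile is loaded onto the vehicle before the drive, so this lookup works
# even in a tunnel or between tall buildings with no wireless connectivity.
tile = MapTile("tile_042", [MappedSign("STOP", 12.5, 3.0, 180.0, True)])
print(tile.nearest_sign(10.0, 2.0))
```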

… But the Real World Changes Even More Often

Argo produces highly accurate and updated 3D maps, but the real world changes more often than maps can be updated. Temporary construction and other events can cause sudden, unforeseen changes in signage, lane markings, and other seemingly “fixed” roadway infrastructure, before new map updates can be created and installed.

Fortunately, maps are not the only source of truth for Argo’s Autonomy Platform. The platform also uses powerful artificial intelligence software to compare what its AV sensors are detecting in real time with the information gathered previously, verified by human analysts, and stored in the map.

How does it do this? Using Argo’s perception system, a part of the larger Autonomy Platform that rapidly interprets sensor data and identifies what is in front of and around an AV on the roads.

Training Artificial Intelligence to Recognize and Understand Different Signs

The artificial intelligence that powers Argo’s AV perception system is a deep neural network, which is, in turn, developed using a computer science technique called, fittingly, deep learning.

In deep learning, engineers “train” a neural network to recognize and distinguish between a wide, ever-growing variety of data; in this case, roadway objects. And the training happens off the AVs, on computer servers, where it can be safely checked and validated before it gets installed on the actual vehicles.

Here’s how it works: Engineers and analysts upload massive amounts of sensor data and camera footage from previous AV test drives to Argo’s perception neural network, which analyzes the data and suggests object categories for what it finds. Engineers and analysts then review the network’s suggestions and verify whether they are correct.

For example, the network looks at hundreds of thousands of road scenes gathered by Argo’s AVs to discern what qualities make a stop sign a stop sign, and how a stop sign differs from other signs and objects.
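The passage above describes standard supervised deep learning. As a rough illustration only, not Argo’s architecture or data, a training loop of this kind looks something like the following, here with a deliberately tiny network and random stand-in images:

```python
# Illustrative sketch of supervised training for a sign classifier (PyTorch).
# The label set, network, and random stand-in data are hypothetical.
import torch
import torch.nn as nn

LABELS = ["stop_sign", "yield_sign", "no_entry", "other"]   # assumed classes

model = nn.Sequential(                       # deliberately small CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(LABELS)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for analyst-verified training examples: 64x64 RGB crops + labels.
images = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, len(LABELS), (32,))

for epoch in range(5):                       # training happens off-vehicle
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```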

That’s easy, right? U.S. drivers know a stop sign looks like an octagon and is always red with big white letters.

But wait — stop sign colors can change over time as a sign is exposed to the elements, and even the octagon shape itself can change if damaged by weather and wear and tear. That’s why Argo’s engineers supply the neural network with a constantly expanding library of stop signs in all kinds of conditions, locations, and angles.

“We look at all sorts of different stop signs,” says Guy Hotson, a Software Engineering Manager at Argo. “This includes stop signs on street corners, stop signs connected to school buses, stop signs held by construction workers, and even pictures of stop signs.”

The network “further classifies if the stop sign is facing us, if it is a temporary or permanent stop sign, and if it is real or ‘real like’ (e.g. picture of a stop sign on something else),” he says.
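One common way to teach a network to cope with faded, tilted, or partly obscured signs is to augment the training images with those conditions. The sketch below shows what such augmentation could look like with off-the-shelf torchvision transforms; the specific transforms and parameters are assumptions, not a description of Argo’s pipeline.

```python
# Illustrative augmentation pipeline for sign image crops (torchvision).
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, saturation=0.7),     # faded paint
    transforms.RandomPerspective(distortion_scale=0.3, p=0.5),  # odd viewing angles
    transforms.RandomRotation(degrees=10),                      # tilted or bent posts
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.3),                            # partial occlusion or damage
])

# Usage on a PIL image of a sign crop:
#   augmented_tensor = augment(sign_image)
```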

During training, the neural network’s classifications are always subsequently verified by Argo analysts and software engineers, and go through a rigorous internal safety review. Only if they pass all of that are they installed on the Argo AVs as part of the perception system.

That way, while out driving, Argo’s AVs can accurately identify newly installed signs that have not yet been mapped, in real time and without a wireless data connection.

What Happens When What the Autonomous Car Sees Diverges From the Maps?

When the AV encounters a new sign whose location and directions were not recorded in the HD maps, the perception system categorizes it in real time and follows the appropriate driving instructions for that sign.

In cases where there is a discrepancy between the signage recorded on the maps and what the AV’s perception system sees live on the street, the Argo Autonomy Platform compares the two and analyzes whether the mapped signage still exists in any discernible form. If so, it follows the mapped sign’s instructions.

If the perception system can’t determine with high confidence what it is seeing, it wirelessly contacts Argo’s Remote Guidance team of human operators: trained employees on standby who use the AV’s onboard cameras to survey the situation and present the AV with the correct driving action.

Importantly, the AV still undertakes any such action on its own: the Argo Autonomy Platform is always responsible for its own driving maneuvers and decisions.
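Put together, the decision flow described in this section (follow a mapped sign that is still discernible, follow a confidently identified new sign, otherwise ask Remote Guidance) can be sketched roughly like this, with hypothetical names and thresholds:

```python
# Rough sketch of the sign-reconciliation flow described above.
# Threshold, function name, and string labels are illustrative assumptions.
from typing import Optional

CONFIDENCE_THRESHOLD = 0.9  # assumed value

def resolve_sign(detected_type: Optional[str],
                 detection_confidence: float,
                 mapped_type: Optional[str],
                 mapped_sign_still_discernible: bool) -> str:
    """Return which instruction source the vehicle should act on."""
    if mapped_type is not None and mapped_sign_still_discernible:
        # A sign was mapped here and is still recognizable in some form:
        # follow the mapped instruction even if the face has been altered.
        return f"FOLLOW_MAPPED:{mapped_type}"
    if detected_type is not None and detection_confidence >= CONFIDENCE_THRESHOLD:
        # Newly installed, unmapped sign that perception identifies reliably.
        return f"FOLLOW_DETECTED:{detected_type}"
    # Perception is unsure and the map does not resolve it: ask a human.
    return "REQUEST_REMOTE_GUIDANCE"

print(resolve_sign("STOP", 0.97, None, False))            # new, unmapped stop sign
print(resolve_sign("SPEED_85", 0.55, "SPEED_35", True))    # altered sign, mapped value wins
print(resolve_sign(None, 0.20, None, False))               # unclear scene, hail Remote Guidance
```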

Together, Perception and Maps Allow Argo to Navigate Even Altered Street Signs

Graffiti and street art are common on city signage and could pose real traffic problems if drivers become confused by the alterations.

Even in these types of cases, the Argo AV has a way to identify what it’s seeing and follow the proper driving instructions.

“The offline map sign detection pipeline is able to correctly identify a modified sign,” says Willi Brems, an Argo software engineer in Munich, Germany.

It does this by looking at previously mapped sign locations and the specific visual cues that make up the sign in front of it on the road — colors, symbols, lettering, and shape.

To test Argo’s software on this problem, Brems took a screenshot from a video showing a street artist altering a European “no entry” sign and processed it through Argo’s cloud-based perception system, the same system used to train the neural network that runs in the perception system on the AVs.

The result? The detection system correctly identified a traffic sign and highlighted it with a yellow box.
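For readers curious what that overlay step looks like in code, it can be approximated with a few lines of Python and Pillow. The detector here is a stub standing in for the real perception model; `detect_signs` is a hypothetical placeholder, not an Argo API.

```python
# Illustrative sketch: draw a highlight box around a detected sign.
from PIL import Image, ImageDraw

def detect_signs(image):
    # Stub: pretend a perception model returned one bounding box
    # as (left, top, right, bottom) pixel coordinates plus a label.
    return [("traffic_sign", (120, 80, 220, 180))]

image = Image.new("RGB", (640, 360), "gray")        # stand-in for the screenshot
draw = ImageDraw.Draw(image)
for label, box in detect_signs(image):
    draw.rectangle(box, outline="yellow", width=4)  # highlight the detection
    draw.text((box[0], box[1] - 14), label, fill="yellow")
image.save("detection_overlay.png")
```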

Street artists may or may not be intending to confuse drivers, but what about hackers and other malicious actors who are deliberately trying to create problems?

For example, a group of university researchers exploring problems with computer vision put tape over a speed limit sign and caused a vehicle with camera-based driver-assist technology to improperly speed.

Even in these cases of “hardware” hacking and sign alteration, the Argo Autonomy Platform always compares what it is seeing with the perception system to what’s in the maps. If it looks like a match despite the alterations, the AV follows the original sign instruction.

If the Argo Autonomy Platform cannot understand the sign in front of it, and fails to reconcile it with the prior sign information in the maps, it hails Argo Remote Guidance for further clarification.

The AV does not begin speeding improperly or automatically follow an altered version of a pre-mapped sign without first verifying it against the original sign instructions in its maps or with Remote Guidance.

However, it does follow the instructions of a newly installed, unmapped sign that it confidently and reliably identifies using the onboard perception system, similar to how human beings follow a newly installed street sign that they recognize as legitimate.

All of which is a great sign (pun intended) of the real-world safety and reliability of Argo’s autonomous vehicles.
