
A Decade after DARPA: The State of the Art in Self-Driving Cars


A decade ago in the California high desert, 11 finalists competed in an unprecedented 60-mile race. Robot cars needed to safely and swiftly complete the mission without any human intervention — while also interacting with human-driven vehicles — in under six hours.

It was the 2007 DARPA Urban Challenge, an autonomous vehicle competition that unofficially kicked off today’s self-driving technology initiatives. The vehicles were considered incredible at the time, and looking back, this marked the beginning of a long journey.

DARPA ensured a certain level of success by carefully managing scope: Participants agreed to a set of rigorously defined traffic rules, and DARPA eliminated pedestrian and cyclist traffic from the challenge. Despite these simplifications, what the teams accomplished was impressive — with most putting their systems together largely from scratch in just 18 months.

The DARPA challenge highlighted the need for more advanced computational power and algorithm development. At the time, we relied heavily on rules-based programming techniques, which meant robotic systems of a decade ago tended to operate only in very constrained environments, around well-behaved road users that would not deviate much from an established set of rules.

Many of us at Argo have been in the field of robotics and self-driving cars for well over a decade, and as we now work to bring this technology to the masses, we are leveraging our extensive expertise, including lessons learned from the DARPA Urban Challenge. Just a few months shy of Argo AI’s first birthday, we’ve assembled an experienced team of almost 200 employees, and we now have test vehicles on the road in Pittsburgh and Southeast Michigan.

We know firsthand the challenges that come with commercializing the software and hardware that fuel highly automated and intelligent systems. Working outdoors among vehicle traffic, pedestrians and cyclists, none of whom strictly adhere to a fixed set of rules, is tricky. Real-world conditions such as night and day, changing weather, and different road geometries and materials compound the difficulty. The dynamics of the environment bring inconsistency and variability to what robotic system builders have traditionally had to simplify into a basic set of assumptions.

In the past few years, the game has changed due in part to the computational power now available, but with this has come a new set of complexities we are still learning to manage. Many advancements in processing power, storage and artificial intelligence are coming together so that these computers can reason through problems without requiring a script. They will be able to learn from massive amounts of data, to recognize patterns with astonishing accuracy and to filter out anomalous inputs from sensors to focus on what matters the most.

As we embrace these advancements, we do so knowing that no single tool, technique or algorithm alone will categorically solve all of the self-driving challenges. Here is our take on some considerations to thoughtfully build a self-driving car.

Sensing the world

Sensors still have a long way to go. We use LiDAR sensors, which work well in poor lighting conditions, to capture the three-dimensional geometry of the world around the car, but LiDAR doesn’t provide color or texture, so we use cameras for that. Yet cameras struggle in poor lighting, and tend to fall short of providing enough focus and resolution across all desired ranges of operation. Radar, in contrast, is relatively low resolution but can directly measure the velocity of road users even at long distances.

That’s why we still have so many sensors mounted on the car — the strengths of one complement the weaknesses of another. Individual sensors don’t fully reproduce what they capture, so the computer has to combine the inputs from multiple sensors, then sort out the errors and inconsistencies. Combining all of this into one comprehensive and robust picture of the world for the computer to process is incredibly difficult.
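To make the idea of complementary sensors concrete, here is a toy sketch (not Argo's actual stack) of one classic way to combine redundant measurements: an inverse-variance weighted average, in which a precise sensor pulls the estimate harder than a noisy one, and the fused estimate ends up tighter than any single input. The sensor values and variances below are purely illustrative.

```python
# Toy sensor fusion: combine range estimates from several sensors,
# weighting each by the inverse of its variance. A precise sensor
# (small variance) dominates; a noisy one contributes less.

def fuse_positions(measurements):
    """measurements: list of (position_m, variance_m2) tuples."""
    weight_sum = sum(1.0 / var for _, var in measurements)
    fused = sum(pos / var for pos, var in measurements) / weight_sum
    fused_variance = 1.0 / weight_sum  # always below the best input
    return fused, fused_variance

# Illustrative readings for one object ahead of the car:
# LiDAR gives precise range; the camera is coarser on range (but adds
# color/class); radar is coarsest on position (but, not shown here,
# measures velocity directly).
lidar = (20.1, 0.05)   # (metres, variance in m^2)
camera = (19.5, 1.00)
radar = (21.0, 2.00)

pos, var = fuse_positions([lidar, camera, radar])
```

The fused variance is smaller than even the LiDAR's alone, which is the mathematical core of why multiple imperfect sensors beat one: their errors partially cancel rather than compound.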

Developing a system that can be manufactured and deployed at scale with cost-effective, maintainable hardware is even more challenging. We are innovating across the sensing hardware and software stack to lower costs, reduce sensor count, and improve range and resolution. There remains significant work to be done to accomplish these conflicting objectives and get the technology to reliably scale.

Understanding the world

Once an autonomous vehicle has the tools to “see” relevant objects around it, it’s up to the car itself to take the next step — identifying the type of object, whether it’s a pedestrian, cyclist, another vehicle or debris on the road, and how fast that object is moving. The car then must make a determination about that object’s likely behavior.

Advancements in artificial intelligence and machine learning, powered by ever-increasing computing and storage options in the cloud, have fueled new algorithms and driven new twists on old algorithms. These new tools are incredibly powerful for building algorithms that can robustly sift through the millions of pixels of information flowing from our sensors every second, then make determinations about the location, size and speed of road users relevant to the car.

One part of the process in building these algorithms is to collect millions of miles of real-world data from our sensors, then use that information to teach an algorithm to detect the relevant road users despite the challenges presented by noisy or erroneous sensor data. Significant tool chains and operations teams manage this data flow and development process.

Our early results are indeed impressive, but we know full well that the devil is always in the details.


When we drive a car today, we’re subconsciously estimating the next few seconds of behavior from other road users — anticipating when a pedestrian might jaywalk or when another car may be about to cut us off. Attentive drivers are incredibly good at reacting in these situations — managing their speed and planning out contingencies to adapt to anomalous behavior from others. These same actions, which good drivers perform quickly while avoiding a drastic response, are also required for a self-driving car to navigate busy city streets.

We must build algorithms that enable our autonomous vehicle to respond to a deeper understanding of the likely behavior of other road users. We need to instill “thoughtfulness” into the technology to ensure that the car can operate safely, reliably and predictably.

For example, the car needs to know when to move over slightly to give a large truck more room, or to adjust its speed to stay out of another driver’s blind spot. At the same time, we have to build algorithms that let it recognize when it’s being overly conservative, when it needs to “nudge” into dense traffic, or when to commit to an action consistently so that other road users can respond correctly. Throughout, as the computer absorbs all of this information, it’s key that it never gets distracted or learns the wrong model, lest it act strangely in anticipation of an action that never comes to fruition.

This is the balance we must deliver in building these predictive models, and it’s only from all of these examples and real-world driving that we can learn to predict the micro-maneuvers that turn out to be the leading indicators of the likely actions of other road users.

System integration and testing

Generally, the software that powers a self-driving car is what’s called a stochastic system. What this means is that the results are determined through a series of detected patterns and models applied to inherently random sensor inputs, rather than through a mathematical equation with a consistent set of inputs that translates to a consistent set of outputs.

Imagine driving down the same road twice and nothing changes between the first and the second trip. You’re highly unlikely to drive the same path at exactly the same speed the second time around. Self-driving vehicles are no different, though in general they will be more consistent than human drivers.
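The two-trips thought experiment can be sketched in a few lines of code. This is a hypothetical, deliberately simplified controller, not anything from a real driving stack: it follows a lead vehicle using a noisy range sensor, and because the noise differs between runs, two drives through the identical scenario produce slightly different speed traces.

```python
# Minimal illustration of a stochastic system: same road, same logic,
# but random sensor noise makes every run's output slightly different.
import random

def drive_segment(true_gap_m=30.0, steps=50, seed=None):
    """Simulate following a lead vehicle with a noisy range sensor."""
    rng = random.Random(seed)
    speed = 10.0  # m/s
    trace = []
    for _ in range(steps):
        measured_gap = true_gap_m + rng.gauss(0.0, 0.3)  # noisy input
        # Simple proportional controller toward a 2-second gap.
        speed += 0.05 * (measured_gap - 2.0 * speed)
        trace.append(round(speed, 3))
    return trace

run_a = drive_segment(seed=1)
run_b = drive_segment(seed=2)
# run_a and run_b differ, even though the scenario is identical.
```

This is also why recorded seeds and logged sensor data matter so much in testing: replaying the same inputs is the only way to reproduce a given run exactly.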

Testing stochastic systems requires a significant number of repetitions drawn from real-world data for the results to be representative. That means we must gather millions of miles of road experience to teach the software to drive with confidence. (Imagine needing to drive millions of miles to get your driver’s license!) But not all miles are created equal, so “accumulated miles” is not an expressive enough metric to track progress. Think of it this way: The skills you acquired learning to drive in a quiet Midwestern town will not translate should you find yourself driving in the heart of Manhattan.

The algorithms we build move millions of pixels per second through complex math and logic to calculate important outcomes about the state of the world of an autonomous vehicle. Given the high dimensionality of these inputs, it’s impossible to test across the space of every possible input combination — there would be many trillions of combinations to test, which is simply unmanageable.

So we need to be clever about how we use the recorded miles of driving experience from our test vehicles. We’re building tools that can extract the right set of miles that sufficiently covers the realistic and relevant scenarios that the vehicle is likely to see, and then test for the right response. This balance requires extensive driving experience and data collection in the target deployment area that covers as many diverse and challenging scenarios as possible. We must also collect sufficient variations around environmental changes in each scenario that might degrade a sensor’s output, such as weather and lighting conditions.
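One way to picture this bookkeeping is as a coverage matrix over scenarios and environmental conditions. The sketch below is purely illustrative (the scenario labels, condition labels, and threshold are invented for this example, not Argo's taxonomy): logged drives are tagged, and any (scenario, condition) pair with too few recorded examples is flagged as a gap that still needs targeted data collection.

```python
# Hypothetical scenario-coverage check: tag each logged drive with a
# scenario and an environmental condition, then report which cells of
# the target matrix still lack enough recorded examples.
from itertools import product

TARGET_SCENARIOS = ["unprotected_left", "cyclist_pass", "jaywalker"]
TARGET_CONDITIONS = ["day_clear", "night_clear", "day_rain"]

def coverage_gaps(logged_drives, min_examples=3):
    counts = {}
    for drive in logged_drives:
        key = (drive["scenario"], drive["condition"])
        counts[key] = counts.get(key, 0) + 1
    # Every (scenario, condition) pair still below the threshold.
    return [
        pair
        for pair in product(TARGET_SCENARIOS, TARGET_CONDITIONS)
        if counts.get(pair, 0) < min_examples
    ]

logs = [
    {"scenario": "unprotected_left", "condition": "day_clear"},
    {"scenario": "unprotected_left", "condition": "day_clear"},
    {"scenario": "unprotected_left", "condition": "day_clear"},
    {"scenario": "cyclist_pass", "condition": "night_clear"},
]
gaps = coverage_gaps(logs)  # under-covered pairs to prioritize
```

A real pipeline would mine these tags automatically from logged sensor data rather than hand-label drives, but the accounting question is the same: which realistic scenarios, under which conditions, have we not yet seen enough of?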

We have built a dedicated team of logistics and test operators to safely execute these test miles, plus a team of analytics professionals and software engineers who are creating the tools to manage the data flow that gives us confidence in our completeness in scenario coverage.

We’re still very much in the early days of making self-driving cars a reality. Those who think fully self-driving vehicles will be ubiquitous on city streets months from now, or even in a few years, are not well connected to the state of the art or committed to the safe deployment of the technology. Those of us who have been working on the technology for a long time will tell you the problem is still really hard, and the systems are as complex as ever.

Everyone knows focused teams that innovate and work hard can solve amazingly difficult problems. At Argo, we see these challenges as an inspiration. They drive us to leverage the advancements of the last decade to propel us into a new era where the commercial success of self-driving cars will be a reality.

We’re taking a pragmatic approach to bringing about fully self-driving cars — incorporating the state of the art while acknowledging there’s no silver bullet. We’re playing the long game and avoiding the hype in our commitment to bring this important technology to maturity in the form of a great product that earns the trust of millions of people around the world.
