When will self-driving cars be safe?
Everyone talks about safety as it relates to driving, but what does “safe” actually mean? Our perception of safety doesn’t always align with reality. My mom complains about my driving even at a crawl, but she’s absolutely fine sitting in the back while a random cab driver swerves through Manhattan traffic.
“He’s a professional,” she says to me. “You’re not.”
I don’t know if there’s any safety metric that would satisfy her, but I do know that automotive safety is three-dimensional: the vehicle, what the driver chooses to do, and what others choose to do. One might even add a fourth dimension: other vehicles. And a fifth: infrastructure, including road conditions, signage and street design.
The further back one steps, the more obvious it is that safety isn’t simple. The safest car and driver, human or machine, have to operate in the real world. Safety is holistic, and boiling it down to a single number won’t get us closer to understanding what “safe” really means.
On this episode of No Parking, Bryan and I dove deep into the history and future of safety with Dr. Philip Koopman. From his time as a U.S. Navy submariner and a Carnegie Mellon University professor to joining the National Robotics Engineering Center and co-founding Edge Case Research, Koopman has long been the tip of the intellectual spear attempting to define the bar for safety, and raise it.
Koopman argued that the definition of safety is much more complicated than just “something that doesn’t kill you.” It means different things to different industries, and even different companies within them.
“For [vehicle] safety, there’s three bins. There are things that are actively dangerous. Everyone looks and says, ‘Oh, that’s scary.’”
Over time, all that weight, speed and energy coalesce into a machine we trust for everyday use. That’s Koopman’s middle bin. But real safety doesn’t click until option three, he said.
“[The final bin] isn’t, ‘I have one vehicle, and it hasn’t killed anyone today.’ … It’s that there are 100 million vehicles on the road, and if you’re worried, well, that [risk] is one in a million.”
It was a fascinating conversation that covered the Wright Brothers, NASA, autonomous vehicles, machine learning, the difference between reliability and dependability, and the deconstruction of countless engineering terms I’d never heard before. By the end, I had a lot more questions about the history of safety than I did about the future. When the first cars and planes were built in the early 20th century, safety was barely part of the equation. It’s hard to believe we tolerated the risks of early aviation, or roadways full of unlicensed drivers. Today commercial aviation is vastly safer than our roads, but plane crash headlines capture far more attention than pedestrian and cyclist deaths, which are at a 25-year high.
It seems our perception of risk — and risk tolerance — has never been consistently aligned with reality, and Koopman pointed out that this is being carried into conversations about autonomous vehicles, even during the testing phase.
“People get hung up about, well, is the autonomy trustworthy? That’s actually not the point for safety on these, because the point is you’re testing stuff that isn’t done yet. It’s not ready yet. That’s why you’re testing.”