Training wheels for autonomous vehicles come in the shape of a human behind the wheel. But when you remove the safety driver from a robotic car, two tons of metal is let loose.
Just last week, California’s Department of Motor Vehicles announced that it plans to allow companies to test autonomous cars on its roads without on-board backup drivers before the end of the year. The cars would still be required to have a remote operator capable of monitoring them, but nevertheless, as the Guardian correctly points out, the news feels like a defining moment for the technology. Has it matured enough for us to finally place our faith in the robots?
The idea of never touching the wheel is so tantalizing that the likes of Waymo, Tesla, and Uber are all racing to develop autonomous vehicles as fast as they can. But in the rush, we shouldn’t forget that the cars cannot yet cope with many complex road conditions.
When our own Will Knight took a ride in one of Uber’s first autonomous vehicles, his ride certainly couldn’t. “The car performed well in many difficult situations—reacting to pedestrians darting into the road, for example,” he wrote at the time. “However, several times the person behind the wheel needed to take control: once so the car didn’t become stuck behind a truck, and once to avoid another vehicle making a sudden turn.”
Now, six months later, they still can’t. Recode has obtained a series of internal Uber documents appearing to suggest that a driver is still very much required in the ride-hailing company’s autonomous vehicles. Just last week, for instance, its 43 cars clocked up a grand total of 20,354 autonomous miles, but safety drivers had reason to intervene once every 0.8 miles on average—and that miles-per-intervention figure has actually fallen since January, meaning interventions have grown more frequent.
That sounds awful. But the figure counts every time a driver took back control of a vehicle, which might be because it started raining heavily, or road markings ran out, or something else mundane happened. A more telling figure is the number of miles between what Uber calls “critical” interventions—situations where, had the driver not stepped in, a person might have been injured or at least $5,000 worth of property damage could have occurred. As of last week, that figure was 196 miles between incidents, up from 50 miles in January—getting better, but still clearly problematic.
So it seems Uber’s vehicles may not be quite ready to take to California’s streets without a driver. Last year’s fatal self-driving Tesla crash suggests that its vehicles, too, might not be. The days of forgetting about the wheel will surely come—but for safety’s sake, it pays to be patient.
(Read more: Recode, Guardian, “Fatal Tesla Autopilot Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up,” “What to Know Before You Get In a Self-driving Car,” “My Self-Driving Uber Needed Human Help”)