Robo-cars and Humans Will Struggle to Coexist, at Least For Now
On its first day on the job yesterday, a self-driving shuttle in Las Vegas got into a crash.
The vehicle is one of several made by French startup Navya that are part of a trial sponsored by AAA Northern California, Nevada, and Utah. Each shuttle carries eight people and uses sensors and AI to navigate the streets.
Yesterday was the first public test, and as the Guardian notes, it didn’t go perfectly. During a trip, one vehicle sensed that a delivery truck was approaching and pulled to a stop in order to avoid a collision. Sadly, it seems the driver of the truck was paying less attention, and grazed the front fender of the shuttle.
In a statement issued by the Las Vegas city government, the organizers of the trial say that the delivery truck driver was at fault (local police agree), and that the autonomous vehicle worked as designed. “The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident,” they write. In a sense, that may seem like a sufficient safeguard. After all, the vehicles only serve a 0.6-mile loop around the Fremont East district of Las Vegas, and never travel faster than 15 miles per hour.
But the incident underscores how human drivers and robotic cars are going to struggle to safely integrate on our roads, at least at first. In this case, for instance, the car might have been better off reversing a little, based on the knowledge that humans are fallible creatures—but it wasn’t programmed to, so it didn’t.
On this very point, the New York Times Magazine published a nice feature yesterday about a future in which just 20 percent of cars on our roads will be robotic. It’s worth reading (and will get you thinking about a future where having sex in moving cars is a reality, too).
But the article also raises questions facing autonomous vehicles that we’ve asked ourselves before. If a robotic car does make a mistake, how do you work out what went wrong, given that it’s currently impossible to discern the inner workings of deep-learning systems? Do we expect self-driving cars to be totally safe, or are they allowed to screw up sometimes? And what will autonomy do to insurance when culpability is harder to assess? They’re big problems—so far, without answers.