Robo-cars and Humans Will Struggle to Coexist, at Least For Now

November 9, 2017

On its first day on the job yesterday, a self-driving shuttle in Las Vegas got into a crash.

The vehicle is one of several made by French startup Navya that are part of a trial sponsored by AAA Northern California, Nevada, and Utah. Each shuttle carries eight people and uses sensors and AI to navigate the streets.

Yesterday was the first public test, and as the Guardian notes, it didn’t go perfectly. During a trip, one vehicle sensed that a delivery truck was approaching and pulled to a stop in order to avoid a collision. Sadly, it seems the driver of the truck was paying less attention, and grazed the front fender of the shuttle.

In a statement issued by the Las Vegas city government, the organizers of the trial say that the delivery truck driver was at fault (local police agree), and that their autonomous vehicle worked as designed. “The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident,” they write. In a sense, that may seem like a sufficient safeguard. After all, the vehicles only serve a 0.6-mile loop around the Fremont East district of Las Vegas, and never travel faster than 15 miles per hour.

But the incident underscores how human drivers and robotic cars are going to struggle to safely integrate on our roads, at least at first. In this case, for instance, the car might have been better off reversing a little, based on the knowledge that humans are fallible creatures—but it wasn’t programmed to, so it didn’t.

On this very point, the New York Times Magazine published a nice feature yesterday about a future in which just 20 percent of the cars on our roads will be robotic. It’s worth reading (and will get you thinking about a future where having sex in moving cars is a reality, too).

But the article also raises questions facing autonomous vehicles that we’ve asked ourselves before. If a robotic car does make a mistake, how do you work out what went wrong, given that it’s currently impossible to discern the inner workings of deep-learning systems? Do we expect self-driving cars to be totally safe, or are they allowed to screw up sometimes? And what will autonomy do to insurance when culpability is harder to assess? They’re big problems—so far, without answers.
