
Robo-cars and Humans Will Struggle to Coexist, at Least For Now

November 9, 2017

On its first day on the job yesterday, a self-driving shuttle in Las Vegas got into a crash.

The vehicle is one of several made by the French startup Navya that are part of a trial sponsored by AAA Northern California, Nevada & Utah. Each shuttle carries eight people and uses sensors and AI to navigate the streets.

Yesterday was the first public test, and as the Guardian notes, it didn’t go perfectly. During one trip, a shuttle sensed a delivery truck approaching and pulled to a stop to avoid a collision. Sadly, the truck’s driver seems to have been paying less attention and grazed the shuttle’s front fender.

In a statement issued by the Las Vegas city government, the organizers of the trial say that the delivery truck’s driver was at fault (local police agree) and that their autonomous vehicle worked as designed. “The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident,” they write. In a sense, that may seem like a sufficient safeguard. After all, the vehicles serve only a 0.6-mile loop around the Fremont East district of Las Vegas and never travel faster than 15 miles per hour.

But the incident underscores how human drivers and robotic cars are going to struggle to safely integrate on our roads, at least at first. In this case, for instance, the car might have been better off reversing a little, based on the knowledge that humans are fallible creatures—but it wasn’t programmed to, so it didn’t.

On this very subject, the New York Times Magazine published a nice feature yesterday about a point in the future when just 20 percent of the cars on our roads will be robotic. It’s worth reading (and will get you thinking about a future where having sex in moving cars is a reality, too).

But the article also raises questions facing autonomous vehicles that we’ve asked ourselves before. If a robotic car does make a mistake, how do you work out what went wrong, given that it’s currently impossible to discern the inner workings of deep-learning systems? Do we expect self-driving cars to be totally safe, or are they allowed to screw up sometimes? And what will autonomy do to insurance when culpability is harder to assess? They’re big problems—so far, without answers.
