Humans Will Bully Mild-Mannered Autonomous Cars

Drivers, pedestrians, and cyclists alike may find themselves taking advantage of the safety features built into risk-averse robotic vehicles.
November 3, 2016

Step in front of an autonomous car, and it should stop. Cut one off while you’re driving, and it should hit the brakes. These are obvious safety features to build into robotic vehicles—but they also leave open the possibility for humans to game their behavior. It’s easy to imagine cyclists ruling the roads of New York City if all taxis were driverless.

That’s certainly a fear for Volvo. Speaking to the Guardian, the company’s senior technical leader, Erik Coelingh, explained that the automaker plans to leave its self-driving cars unmarked during upcoming London trials so that human drivers aren’t tempted to take advantage. “I’m pretty sure that people will challenge them if they are marked by doing really harsh braking ... or putting themselves in the way,” he said.

In fact, Google has already experienced similar problems firsthand. Some of its cars found it difficult to pull away from stop signs, because they were too timid: other cars simply whistled by while they sat stranded. That particular problem was overcome by having the car inch forward at the junction, much the way a human would, to indicate its intention.

If you want to make a driverless car stop, just drive right in front of it.

But dialing up how daring the cars are to match human drivers can only go so far—not least because there will always be people who drive aggressively in order to get an edge. In fact, Discover points to a study carried out by the London School of Economics, which found that drivers who are “combative” on the road are more welcoming of autonomous cars. That could be because they expect the cars to be pushovers.

Pedestrians may think similarly. A new study from the University of California, Santa Cruz, has modeled how pedestrians and autonomous vehicles might interact using game theory—in essence applying a little academic thinking to the everyday game of playing chicken with traffic. The conclusion? “Because autonomous vehicles will be risk-averse ... pedestrians will be able to behave with impunity, and autonomous vehicles may facilitate a shift toward pedestrian-oriented urban neighborhoods,” writes the author, Adam Millard-Ball.
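The study’s logic can be sketched as a standard game of chicken. The payoff numbers below are illustrative assumptions, not figures from the Millard-Ball paper; the point is only the structure: once one player (the risk-averse vehicle) is committed to yielding, the other player’s best response is always to go.

```python
# Illustrative game-of-chicken payoff matrix (hypothetical numbers,
# not from the Millard-Ball study).
# Each cell is (pedestrian payoff, vehicle payoff).
# Actions: "go" (assert right of way) or "yield".

PAYOFFS = {
    ("go", "go"):       (-100, -100),  # collision: catastrophic for both
    ("go", "yield"):    (   1,   -1),  # pedestrian crosses; car waits
    ("yield", "go"):    (  -1,    1),  # pedestrian waits; car passes
    ("yield", "yield"): (   0,    0),  # standoff
}

def best_response(player, opponent_action):
    """Return the action that maximizes the player's payoff,
    holding the opponent's action fixed."""
    actions = ["go", "yield"]
    def payoff(action):
        if player == "pedestrian":
            return PAYOFFS[(action, opponent_action)][0]
        return PAYOFFS[(opponent_action, action)][1]
    return max(actions, key=payoff)

# A risk-averse autonomous vehicle is programmed to always "yield"
# when a pedestrian might step out. The pedestrian's best response
# is then to "go" with impunity -- the shift the study predicts.
print(best_response("pedestrian", "yield"))  # -> go
print(best_response("vehicle", "go"))        # -> yield
```

Under these assumed payoffs, the pedestrian’s dominant strategy against a guaranteed-to-yield car is to cross, which is the “impunity” the study describes.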

The ability to take advantage of autonomous cars’ caution is likely to extend to all road users. Google, for instance, has explained in the past that its AI systems are able to detect cyclists, with the cars being “taught to drive conservatively around them.” But one cyclist in Austin reported that a Google vehicle found itself unable to set off because of its overcautious approach around the bicycle.

There is, of course, still a need for some caution on the part of humans. Until autonomous cars are pervasive, stepping into traffic remains a dangerous choice, because it’s hard to tell from a distance whether a car is autonomous or not. And researchers may be able to blunt some of this gaming by making their self-driving cars act more like humans—with, say, more assertive driving or authentic horn-honking.

But unless it comes down to some kind of ethical dilemma, autonomous cars will be trained to avoid accidents. It seems implausible that humans won’t be tempted to take advantage.

(Read more: Guardian, Journal of Planning Education and Research, Discover, “Novelty of Driverless Cars Wears Off Quickly for First-Timers,” “How to Help Self-Driving Cars Make Ethical Decisions,” “Outta My Way! How Will We Translate Google’s Self-Driving Honks?”)
