
Teaching a self-driving car the emergency stop is harder than it seems

April 23, 2019
Passengers get into a self-driving car. Tony Avelar/AP

Much self-driving-car research focuses on pedestrian safety, but passenger safety and comfort matter, too. When braking to avoid a collision, for example, a vehicle should ideally ease into a stop rather than slam into one. In machine-learning parlance, this is a multi-objective problem. Objective one: spare the pedestrian. Objective two: do so without jolting the passenger.

Researchers at Ryerson University in Toronto took on this challenge with deep reinforcement learning. If you’ll recall, reinforcement learning is a subset of machine learning that uses rewards and punishments to teach an AI agent to achieve one or more goals. In this case, the researchers punished the car any time it hit a pedestrian (more severely for higher-speed collisions) and whenever it braked jerkily (more severely for more violent stops).
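A multi-objective reward like the one described can be sketched as a single function that combines both penalties. This is a minimal illustration, not the researchers' actual formulation; the penalty weights and the way speed and jerk are scaled here are assumptions.

```python
def reward(collision: bool, speed: float, jerk: float) -> float:
    """Illustrative multi-objective reward for a braking agent.

    Penalizes pedestrian collisions (more severely at higher impact
    speed) and jerky braking (more severely for more violent stops).
    All coefficients are made-up values for demonstration only.
    """
    r = 0.0
    if collision:
        # Base collision penalty, made harsher as impact speed rises.
        r -= 100.0 + 10.0 * speed
    # Comfort penalty: grows with the magnitude of jerk during braking.
    r -= 0.5 * abs(jerk)
    return r


# A smooth, collision-free stop scores better than a violent one,
# and any collision dominates the comfort term.
safe_smooth = reward(collision=False, speed=0.0, jerk=1.0)
safe_harsh = reward(collision=False, speed=0.0, jerk=8.0)
crash = reward(collision=True, speed=12.0, jerk=2.0)
```

Because both objectives feed into one scalar reward, a standard deep RL algorithm can trade them off during training without any change to the learning loop itself.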

They then tested their model in a virtual environment, using simulations of pedestrians crossing the road that were based on real-world data. The model avoided every collision while braking less jerkily than a comparison model that didn’t consider passenger comfort. It offers a proof of concept for giving passengers a smoother ride without detracting from overall driving safety, though more work is needed to test the idea in the physical world.

This story originally appeared in our Webby-nominated AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.

