
Teaching a self-driving car the emergency stop is harder than it seems

April 23, 2019
Passengers get into a self-driving car. Tony Avelar/AP

Much self-driving-car research focuses on pedestrian safety, but passenger safety and comfort matter too. When braking to avoid a collision, for example, a vehicle should ideally ease into a stop rather than slam into one. In machine-learning parlance, this is a multi-objective problem. Objective one: spare the pedestrian. Objective two: do so without sacrificing the passenger's comfort.

Researchers at Ryerson University in Toronto took on this challenge with deep reinforcement learning. Reinforcement learning is a subset of machine learning that uses rewards and punishments to teach an AI agent to achieve one or more goals. In this case, the researchers punished the car any time it hit a pedestrian, with more severe punishments for higher-speed collisions, and also punished it for jerky braking, with greater punishments for more violent stops.
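That punishment scheme can be sketched as a reward function. The specific terms and weights below are illustrative assumptions, not the researchers' actual formulation:

```python
def reward(collision: bool, speed: float, jerk: float,
           w_collision: float = 10.0, w_jerk: float = 0.1) -> float:
    """Multi-objective reward sketch: punish collisions more at higher
    speeds, and punish jerky braking in proportion to its violence.
    All weights are hypothetical."""
    r = 0.0
    if collision:
        # Harsher punishment for higher-speed impacts
        r -= w_collision * speed
    # Smoother braking (lower jerk) earns a smaller punishment
    r -= w_jerk * abs(jerk)
    return r
```

During training, the agent would receive this signal at each step, so policies that both avoid pedestrians and brake gently accumulate the least punishment.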

They then tested their model in a virtual environment with simulations, based on real-world data, of pedestrians crossing the road. The model avoided all collisions while braking less jerkily than a baseline model that didn't consider passenger comfort. It offers a proof of concept for giving passengers a smoother ride without compromising safety, though the approach still needs to be tested in the physical world.

This story originally appeared in our Webby-nominated AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.

