MIT Technology Review

Teaching a self-driving car the emergency stop is harder than it seems

Passengers get into a self-driving car.

Much self-driving-car research focuses on pedestrian safety, but it is important to consider passenger safety and comfort, too. When braking to avoid a collision, for example, a vehicle should ideally ease into a stop rather than slam on the brakes. In machine-learning parlance, this is a multi-objective problem. Objective one: spare the pedestrian. Objective two: don't do it at the passenger's expense.

Researchers at Ryerson University in Toronto took on this challenge with deep reinforcement learning. If you'll recall, reinforcement learning is a subset of machine learning that uses rewards and punishments to teach an AI agent to achieve one or more goals. In this case, they penalized the car any time it hit a pedestrian (with more severe penalties for higher-speed collisions) and also for jerky braking (with greater penalties for more violent stops).
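The two-part penalty described above can be sketched as a simple reward function. This is a minimal illustration, not the researchers' actual formulation; the penalty weights (`w_collision`, `w_jerk`) and the linear scaling with speed and jerk are assumptions chosen for clarity.

```python
def reward(collision: bool, speed: float, jerk: float,
           w_collision: float = 10.0, w_jerk: float = 0.1) -> float:
    """Hypothetical multi-objective reward for a braking agent.

    collision: whether the car struck the pedestrian this step
    speed: vehicle speed at impact (m/s), scales the collision penalty
    jerk: rate of change of acceleration (m/s^3), proxy for ride discomfort
    """
    r = 0.0
    if collision:
        # Objective one: harsher punishment for higher-speed collisions.
        r -= w_collision * speed
    # Objective two: penalize violent braking in every step.
    r -= w_jerk * abs(jerk)
    return r

# A gentle, collision-free stop is penalized far less than a crash:
print(reward(collision=False, speed=5.0, jerk=2.0))   # small comfort penalty
print(reward(collision=True, speed=10.0, jerk=0.0))   # large safety penalty
```

Because both penalties feed into a single scalar reward, a standard deep reinforcement learning algorithm can trade them off automatically; the weights determine how much passenger comfort the agent will sacrifice for safety margin.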

They then tested their model in a virtual environment with simulations of pedestrians crossing the road, based on real-world data. Their model avoided all collisions while also braking less jerkily than a comparison model that didn't account for passenger comfort. It offers a proof of concept for giving passengers a smoother ride without compromising driving safety, though more work is needed to test the idea in the physical world.

This story originally appeared in our Webby-nominated AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.