
How to Help Self-Driving Cars Make Ethical Decisions

Researchers are trying to program self-driving cars to make split-second decisions that raise real ethical questions.
July 29, 2015

A philosopher is perhaps the last person you’d expect to have a hand in designing your next car, but that’s exactly what one expert on self-driving vehicles has in mind.

Chris Gerdes, a professor at Stanford University, leads a research lab that is experimenting with sophisticated hardware and software for automated driving. But together with Patrick Lin, a professor of philosophy at Cal Poly, he is also exploring the ethical dilemmas that may arise when self-driving vehicles are deployed in the real world.

Gerdes and Lin organized a workshop at Stanford earlier this year that brought together philosophers and engineers to discuss the issue. They implemented different ethical settings in the software that controls automated vehicles and then tested the code in simulations and even in real vehicles. Such settings might, for example, tell a car to prioritize avoiding humans over avoiding parked vehicles, or not to swerve for squirrels.
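One way to picture what such a setting could look like in code is as relative collision-cost weights used when a planner scores candidate maneuvers. The sketch below is purely illustrative: the obstacle classes, weights, and interfaces are assumptions for the example, not the code tested at the Stanford workshop.

```python
from dataclasses import dataclass

# Hypothetical sketch: an ethical "setting" expressed as collision-cost weights
# a trajectory planner could use when scoring candidate maneuvers. The classes
# and numbers are illustrative assumptions, not values from the workshop.
COLLISION_COST = {
    "pedestrian": 1_000_000.0,   # avoiding humans dominates everything else
    "parked_vehicle": 10_000.0,  # hitting a parked car is costly, but far less so
    "squirrel": 0.0,             # "don't swerve for squirrels"
}

@dataclass
class Candidate:
    name: str
    comfort_penalty: float  # e.g., jerk, lateral acceleration
    collisions: list        # (obstacle_class, probability) pairs from prediction

def trajectory_cost(candidate: Candidate) -> float:
    """Score a candidate maneuver; lower is better."""
    cost = candidate.comfort_penalty
    for obstacle_class, probability in candidate.collisions:
        cost += probability * COLLISION_COST.get(obstacle_class, 10_000.0)
    return cost

def choose_maneuver(candidates: list) -> Candidate:
    """Pick the candidate maneuver with the lowest expected cost."""
    return min(candidates, key=trajectory_cost)

# Example: swerve around a squirrel into a parked car, or brake in lane?
options = [
    Candidate("swerve", comfort_penalty=5.0, collisions=[("parked_vehicle", 0.3)]),
    Candidate("brake_in_lane", comfort_penalty=1.0, collisions=[("squirrel", 0.9)]),
]
print(choose_maneuver(options).name)  # -> "brake_in_lane"
```

Framed this way, the ethical choices end up as explicit numbers in an objective function, which is part of what makes them so contentious.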

Illustration by Victor Kerlow

Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. Over the next couple of years, a number of carmakers plan to release vehicles capable of steering, accelerating, and braking for themselves on highways for extended periods. Some cars already feature sensors that can detect pedestrians or cyclists, and warn drivers if it seems they might hit someone.

So far, self-driving cars have been involved in very few accidents. Google’s automated cars have covered nearly a million miles of road with just a few rear-enders, and these vehicles typically deal with uncertain situations by simply stopping (see “Google’s Self-Driving Car Chief Defends Safety Record”).

As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.

At a recent industry event, Gerdes gave an example of one such scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.

“As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility?”

Gerdes pointed out that it might even be ethically preferable to put the passengers of the self-driving car at risk. “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day,” he said.

Gerdes called on researchers, automotive engineers, and automotive executives at the event to prepare to consider the ethical implications of the technology they are developing. “You’re not going to just go and get the ethics module, and plug it into your self-driving car,” he said.

Other experts agree that there will be an important ethical dimension to the development of automated driving technology.

“When you ask a car to make a decision, you have an ethical dilemma,” says Adriano Alessandrini, a researcher working on automated vehicles at the University of Rome La Sapienza, in Italy. “You might see something in your path, and you decide to change lanes, and as you do, something else is in that lane. So this is an ethical dilemma.”

Alessandrini leads a project called CityMobil2, which is testing automated transit vehicles in various Italian cities. These vehicles are far simpler than the cars being developed by Google and many carmakers; they simply follow a route and brake if something gets in the way. Alessandrini believes this may make the technology easier to launch. “We don’t have this [ethical] problem,” he says.
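To see how simple that behavior is, a route-following shuttle's control loop could reduce to something like the sketch below; the stopping threshold and interface are assumptions for the example, not CityMobil2's actual software.

```python
# Minimal sketch of the behavior Alessandrini describes: follow a fixed route,
# and brake whenever anything gets in the way. The threshold is an assumed
# value for illustration only.
OBSTACLE_STOP_DISTANCE_M = 5.0

def control_step(nearest_obstacle_distance_m: float, cruise_speed_mps: float) -> dict:
    """One control cycle for a route-following shuttle."""
    if nearest_obstacle_distance_m < OBSTACLE_STOP_DISTANCE_M:
        return {"target_speed_mps": 0.0, "brake": True}   # stop for any obstruction
    return {"target_speed_mps": cruise_speed_mps, "brake": False}

print(control_step(nearest_obstacle_distance_m=3.2, cruise_speed_mps=4.0))
# -> {'target_speed_mps': 0.0, 'brake': True}
```

Because the vehicle never chooses between obstacles, it never has to weigh one against another.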

Others believe the situation is a little more complicated. For example, Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, says plenty of ethical decisions are already made in automotive engineering. “Ethics, philosophy, law: all of these assumptions underpin so many decisions,” he says. “If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.”

Walker-Smith adds that, given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”
