Emerging Technology from the arXiv

Best of 2015: Why Self-Driving Cars Must Be Programmed to Kill

Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality. From October …

December 24, 2015

When it comes to automotive technology, self-driving cars are all the rage. Standard features on many ordinary cars include intelligent cruise control, parallel parking programs, and even automatic overtaking—features that allow you to sit back, albeit a little uneasily, and let a computer do the driving.

So it’ll come as no surprise that many car manufacturers are beginning to think about cars that take the driving out of your hands altogether (see “Drivers Push Tesla’s Autopilot Beyond Its Abilities”). These cars will be safer, cleaner, and more fuel-efficient than their manual counterparts. And yet they can never be perfectly safe.

And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)
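To make the contrast between those three options concrete, here is a minimal sketch of how they might look when reduced to code. It is purely illustrative and not drawn from any carmaker's software: the Outcome structure and every function name are invented for the example.

    # Toy illustration (not from the article): three hypothetical
    # crash-response policies a self-driving car could be programmed with.
    import random
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        occupant_deaths: int    # expected deaths inside the car
        pedestrian_deaths: int  # expected deaths outside the car

    def minimize_total_deaths(outcomes):
        # Utilitarian policy: pick the outcome with the fewest deaths
        # overall, even if that means sacrificing the occupants.
        return min(outcomes,
                   key=lambda o: o.occupant_deaths + o.pedestrian_deaths)

    def protect_occupants(outcomes):
        # Self-protective policy: protect the people in the car at all
        # costs, breaking ties by total deaths.
        return min(outcomes,
                   key=lambda o: (o.occupant_deaths,
                                  o.occupant_deaths + o.pedestrian_deaths))

    def choose_at_random(outcomes):
        # Random policy: refuse to rank lives and pick an outcome by chance.
        return random.choice(outcomes)

    # The classic dilemma: swerve into a wall (killing the occupant) or
    # stay on course (killing two pedestrians).
    dilemma = [Outcome(occupant_deaths=1, pedestrian_deaths=0),
               Outcome(occupant_deaths=0, pedestrian_deaths=2)]
    print(minimize_total_deaths(dilemma))  # sacrifices the occupant
    print(protect_occupants(dilemma))      # sacrifices the pedestrians

Run on that dilemma, the first policy sacrifices the occupant and the second sacrifices the pedestrians; the article's question is which of these answers a buyer, or society at large, would accept.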

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?

So can science help?

