
Morality, the Next Frontier in Human-Computer Interaction

Think drone strikes are ethically complicated? Autonomous cars will be even thornier.
November 30, 2012

I’ve ridden in an autonomous car, and it didn’t send me hurtling into an existential crisis. I saw Minority Report, and its near-future vision of self-driving sedans zooming down the freeway didn’t strike me as outlandishly unrealistic. But a short essay by Gary Marcus in the New Yorker about the ethical quandaries raised by Google’s driverless car made my hair stand on end. Remember that Philosophy 101 thought experiment asking whether you should shove a fat man off a bridge if it would save a bunch of other people from certain death? According to Marcus, our future robo-chauffeurs will be forced to solve these ethical mindbenders dozens of times a day on our behalf: 

Within two or three decades … it will no longer be optional for machines to have ethical systems. Your [autonomous] car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

Last I checked, Toyota could barely write software to make its brakes work. Now we’re supposed to expect our Priuses to someday solve Sophie’s Choice? Fat chance. But Marcus makes a compelling point about the moral thorniness that logically follows from our facile techno-utopian assumptions. Even if we could program autonomous cars with a set of ethical rules of the road, those rules might well fly in the face of our basic psychology, which doesn’t give a damn about statistical outcomes. (After all, we casually roll the dice with our lives every time we get behind the wheel of a car, yet many people are so afraid of the extremely unlikely event of dying in a plane crash that they refuse to fly.)
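
To see just how facile, consider the crudest possible rendering of Marcus’s scenario as software: a purely utilitarian rule that picks whichever action minimizes expected deaths. This is a hypothetical toy sketch, not how any real autonomous-driving system works; every name, probability, and casualty count in it is invented for illustration:

    # A deliberately naive "ethical" controller for Marcus's bridge scenario.
    # All of it -- the options, the crash probabilities, the casualty counts --
    # is invented for illustration; no real system reduces driving to this.

    def expected_deaths(option):
        """Expected fatalities: probability of a crash times people at risk."""
        return option["crash_probability"] * option["people_at_risk"]

    options = [
        # Plow ahead toward the school bus full of kids...
        {"name": "keep_going", "crash_probability": 0.9, "people_at_risk": 40},
        # ...or swerve off the bridge, risking only the owner (you).
        {"name": "swerve", "crash_probability": 0.5, "people_at_risk": 1},
    ]

    # A strict utilitarian picks whichever option minimizes expected deaths,
    # which here means the car coolly chooses to sacrifice its owner.
    decision = min(options, key=expected_deaths)
    print(decision["name"])  # -> swerve

The arithmetic is trivial (0.5 expected deaths versus 36). What’s hard is that no driver’s psychology accepts being the “1” in that table, however favorable the statistics.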

Still, autonomous cars are almost certainly part of our near future. So which is more plausible: that millions of drivers will simply accept that machines will make life-or-death decisions for them on the fly, perhaps not in their favor, and that a certain annual toll of “driverless vehicular manslaughters” is the price of convenience? Or that cars will simply never become fully autonomous in the strict sense?

Neither, probably, because the scenario Marcus describes, while thought-provoking, is just as contrived as the Trolley Problem from philosophy class. Autonomous cars will surely have manual overrides for emergency situations, if only to protect the Hondas and Toyotas of 2035 from epic legal liabilities. Or, over the next decade, we may lower our expectations of what “autonomous” means and settle for a dumber but more psychologically palatable level of “social intelligence” from our cars (think Furby, not KITT).

In any case, the real problems that artificially intelligent cars will bring with them aren’t the grand techno-ethical abstractions mulled over by the Singularity Institute, but practical issues of product and interface design, constrained by the usual vicissitudes of politics and economics. For better or worse, it’s the designers, lawyers, and consumers—not the philosophers or academics—who will be the ultimate arbiters of what passes muster as a “moral machine.” 
