
Our Robotic Children: The Ethics of Creating Intelligent Life

Philosopher Eric Schwitzgebel argues that conscious machines would deserve special moral consideration akin to our own children.
November 16, 2015

Two children are drowning: your son and a stranger. Who would you save first? Your son, right? What if one of the children was a thinking, feeling robot?

In a fascinating op-ed for Aeon, philosopher Eric Schwitzgebel of the University of California, Riverside, argues that our hypothetical creations would be more than strangers to us. “Moral relation to robots will more closely resemble the relation that parents have to their children,” he writes, “… than the relationship between human strangers.”

Humanity’s fraught relationship with artificial intelligence has been a staple of science fiction since the field of AI itself emerged in the 1950s. As Schwitzgebel puts it:

The moral status of robots is a frequent theme in science fiction, back at least to Isaac Asimov’s robot stories, and the consensus is clear: if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings. Philosophers and researchers on artificial intelligence who have written about this issue generally agree.

What even a decade ago might have seemed a flight of scientific fancy has become a relevant question as the development of AI and robotics proceeds apace. Hardly a day goes by without headlines that seem fantastic.

Our own Will Knight recently wrote about a robotic toddler that learns to stand by using brain-like algorithms; it “imagines” its task before attempting it in physical space. Aviva Rutkin wrote for New Scientist about how Silicon Valley is hiring people to serve as trainers for its burgeoning AI systems. The trainers simultaneously provide backup for the AI and generate a “massive library of training data,” which the AI will parse using various machine-learning algorithms until it can operate with less supervision. How long now until we cross the threshold and create a robot that thinks? That feels?
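That trainer-in-the-loop arrangement is easy to sketch. The toy Python below is a minimal, hypothetical illustration, not any company’s actual pipeline: the `human_trainer` oracle, the two-feature data, and the 0.9 confidence threshold are all invented for the example. It shows the shape of the idea, though: the model defers to a human on cases it’s unsure about, the human’s labels join a growing training library, and round by round the model needs the human less.

```python
# Illustrative sketch of human-in-the-loop training (assumptions noted above):
# a model asks a "trainer" to label only the inputs it is unsure about, folds
# those labels into its training library, and retrains until it can operate
# with less supervision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def human_trainer(x):
    # Stand-in for the paid human trainer: the ground-truth label
    # the person would supply for this input.
    return int(x.sum() > 0)

# Seed library: two trainer-labeled examples, one per class.
library_X = np.array([[1.0, 1.0], [-1.0, -1.0]])
library_y = np.array([human_trainer(x) for x in library_X])

model = LogisticRegression().fit(library_X, library_y)

for round_ in range(5):
    batch = rng.normal(size=(200, 2))
    confidence = model.predict_proba(batch).max(axis=1)
    unsure = confidence < 0.9  # defer to the human only on uncertain cases

    # The trainer labels the hard cases; they join the training library.
    new_y = np.array([human_trainer(x) for x in batch[unsure]])
    library_X = np.vstack([library_X, batch[unsure]])
    library_y = np.concatenate([library_y, new_y])

    model = LogisticRegression().fit(library_X, library_y)
    print(f"round {round_}: asked the human on {unsure.mean():.0%} of inputs")
```

In a real system the model would be vastly more complex and the oracle an actual person, but the supervision curve bends the same way: each round, a smaller share of inputs gets routed to the human.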

“If we create genuinely conscious robots,” Schwitzgebel writes, “we are […] substantially responsible for their welfare. That is the root of our special obligation.” In other words: we brought them into this world, for good or ill—what happens to them after their creation will always, in a significant way, be our fault.

He goes on to quote Frankenstein’s monster, speaking to its creator:

I am thy creature, and I will be even mild and docile to my natural lord and king, if thou wilt also perform thy part, the which thou owest me. Oh, Frankenstein, be not equitable to every other, and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature: I ought to be thy Adam …

Even without the biblical allusion, it’s hard not to feel the weight of the Creator’s responsibility. It’s a heady, dizzying thought, one that pushes past parental concern and into the realm of the god-figure.

Giving our robotic creations the same moral standing as our organic ones will be one hell of a challenge, though. After all, if we can’t get people to treat other humans with a universal level of dignity and respect, how can we expect them to give equal moral consideration to bits and bytes? Let alone expect them to grant our creations special standing because of our unique status as their creators.

As much as we’d like to pretend that our attitudes toward our children are the result of higher reasoning or deeply considered philosophical principles, the reality is messy, hormonal, and very much organic. Children have been getting special moral consideration from their parents since long before Socrates. It’s a deep impulse to treat our children with special care; it’s a similarly deep impulse to treat things that look and act like us with special care. If we’re to give robots special moral status as our progeny, then I’d argue we’d also better design them with expressive faces and only four limbs. We don’t generally grant cephalopods much moral standing, even though they’re extremely intelligent.

Regardless, Schwitzgebel emphasizes an aspect of the great AI debates that is often neglected in popular culture. It’s not only a robot rebellion that we need to worry about. It’s also the burden of creation. Victor Frankenstein clearly wasn’t ready to bear it—let’s be sure we are if and when the time comes.
