
A View from Kenrick Vezina

Our Robotic Children: The Ethics of Creating Intelligent Life

Philosopher Eric Schwitzgebel argues that conscious machines would deserve special moral consideration akin to our own children.

November 16, 2015

Two children are drowning: your son and a stranger. Who would you save first? Your son, right? What if one of the children were a thinking, feeling robot?

In a fascinating op-ed for Aeon, philosopher Eric Schwitzgebel of the University of California, Riverside, argues that our hypothetical creations would be more than strangers to us. “Moral relation to robots will more closely resemble the relation that parents have to their children,” he writes, “… than the relationship between human strangers.”

Humanity’s fraught relationship with artificial intelligence has been a staple of science fiction since the field of modern computer science was born in the 1950s. As Schwitzgebel puts it:

The moral status of robots is a frequent theme in science fiction, back at least to Isaac Asimov’s robot stories, and the consensus is clear: if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings. Philosophers and researchers on artificial intelligence who have written about this issue generally agree.

What even a decade ago might have seemed a flight of scientific fancy has become a relevant question as the development of AI and robotics proceeds apace. Hardly a day goes by without headlines that seem fantastic.

Our own Will Knight recently wrote about a robotic toddler that learns to stand by using brain-like algorithms; it “imagines” its task before it tries it in physical space. Aviva Rutkin wrote for New Scientist about how Silicon Valley is hiring people to serve as trainers for its burgeoning AI systems. The trainers are simultaneously providing backup for the AI and generating a “massive library of training data” which the AI will parse using various machine-learning algorithms until it is able to operate with less supervision. How long now until we cross the threshold and create a robot that thinks? That feels?

“If we create genuinely conscious robots,” Schwitzgebel writes, “we are […] substantially responsible for their welfare. That is the root of our special obligation.” In other words: we brought them into this world, for good or ill—what happens to them after their creation will always, in a significant way, be our fault.

He goes on to quote Frankenstein’s monster, speaking to its creator:

I am thy creature, and I will be even mild and docile to my natural lord and king, if thou wilt also perform thy part, the which thou owest me. Oh, Frankenstein, be not equitable to every other, and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature: I ought to be thy Adam …

Even without the biblical allusion, it’s hard not to feel the weight of the Creator’s responsibility. It’s a heady, dizzying thought—in this case, pushing past parental concern and into the role of god-figure.

Giving our robotic creations the same moral standing as our organic ones will be one hell of a challenge, though. After all, we can’t get people to treat other humans with a universal level of dignity and respect; how can we expect them to give equal moral consideration to bits and bytes—let alone to grant our creations special standing because of our unique status as their creators?

As much as we’d like to pretend that our attitudes toward our children are the result of higher reasoning or deeply thought-out philosophical principles, the reality is messy, hormonal, and very much organic. Children have been getting special moral consideration from their parents since long before Socrates. It’s a deep impulse to treat our children with special care; it’s a similarly deep impulse to treat things that look and act like us with special care. If we’re to give robots special moral status as our progeny, then I’d argue we’d also better design them to have expressive faces and only four limbs. We don’t generally assign cephalopods much moral weight—even though they’re extremely intelligent.

Regardless, Schwitzgebel emphasizes an aspect of the great AI debates that is often neglected in popular culture. It’s not only a robot rebellion that we need to worry about. It’s also the burden of creation. Victor Frankenstein clearly wasn’t ready to bear it—let’s be sure we are if and when the time comes.
