
Do We Need Asimov’s Laws?

As robots become ever more present in daily life, the question of how to control their behaviour naturally arises. Does Asimov have the answer?

In 1942, the science fiction author Isaac Asimov published a short story called “Runaround” in which he introduced three laws governing the behaviour of robots:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He later introduced a fourth law, the Zeroth Law, which outranks the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Since then, Asimov’s laws of robotics have become a key part of a science fiction culture that has gradually become mainstream.

In recent years, roboticists have made rapid advances in the technologies that are bringing closer the kind of advanced robots that Asimov envisaged. Increasingly, robots and humans are working together on factory floors, driving cars, flying aircraft and even helping around the home.

And that raises an interesting question: do we need a set of Asimov-like laws to govern the behaviour of robots as they become more advanced?

Today, we get an answer of sorts from Ulrike Barthelmess and Ulrich Furbach at the University of Koblenz in Germany. These guys review the history of robots in society and argue that our fears over their potential to destroy us are unfounded. Because of this, Asimov’s laws aren’t needed, they say.

The word robot comes from the Czech robota, meaning forced labour, and first appeared in R.U.R., a 1920 play by the Czech author Karel Capek. The anglicised version spread rapidly, along with the idea that such machines could all too easily destroy their creators, a theme that has been common in science fiction ever since.

But Barthelmess and Furbach argue that this fear of machines is rooted far more deeply in our culture. Science fiction stories may often use plots in which robots destroy their creators, but the theme has a long history in literature.

For example, in Mary Shelley’s Frankenstein, a monster made from human body parts turns against Frankenstein, its creator, because he refuses to make a mate for the monster.

Then there is the 16th century Jewish golem narrative, in one version of which a rabbi fashions a creature out of clay to protect his community, promising to deactivate it after the Sabbath. But the rabbi forgets, and the golem turns into a monster that has to be destroyed.

Barthelmess and Furbach argue that the religious undertone in both these stories is that it is forbidden for humans to act like God, and that any attempt to do so will always be punished.

Similar episodes appear in Greek mythology, where figures such as Prometheus and Niobe are punished for their arrogance towards the gods. Stories of this kind have been part of our culture for thousands of years, and it is this deep-rooted fear that science fiction authors play on in stories about robots.

Of course, there are real conflicts between humans and machines. During the industrial revolution in Europe, for example, machines were widely feared for their manifest power to change the world, and with it the lives of many people.

Barthelmess and Furbach point out that in early 19th century England, a movement to destroy weaving machines became so serious that Parliament made demolishing machines a capital crime. A group known as the Luddites even battled the British army over the issue. “There was a kind of technophobia which resulted in fights against machines,” they say.

Of course, it’s not beyond the realms of possibility that a similar antagonism could develop towards the new generation of robots set to take over the highly repetitive tasks that human workers currently perform in factories all over the world, particularly in Asia.

However, there is a very different attitude towards robots in Asia. Countries such as Japan lead the world in the development of robots for automated factories and as human helpers, partly because of Japan’s ageing population and the well-known health care problems this will produce in the not-too-distant future.

That attitude is perhaps embodied by Astro Boy, a fictional robot who in 2007 was named by Japan’s Ministry of Foreign Affairs as the Japanese envoy for safe overseas travel.

For these reasons, Barthelmess and Furbach argue that what we fear about robots is not the possibility that they will take over and destroy us but the possibility that other humans will use them to destroy our way of life in ways we cannot control.

In particular, they point out that many robots will protect us by design. For example, automated vehicles and planes are being designed to drive and fly more safely than human operators ever can. So we will be safer using them than not using them.

An important exception is the growing number of robots specifically designed to kill humans. The US, in particular, uses drones for targeted killings in foreign countries. The legality, not to mention the morality, of these actions is still being ferociously debated.

But Barthelmess and Furbach imply that humans are still ultimately responsible for these killings, and that international law, rather than Asimov’s laws, should be able to cope with the issues that arise, or be adapted to do so.

They end their discussion by considering the potential convergence between humans and robots in the near future. The idea here is that humans will incorporate various technologies into their own bodies, such as extra memory or processing power, and so will eventually fuse with robots. At that point, everyday law will have to cope with the behaviour and actions of ordinary people, and Asimov’s laws will be obsolete.

An interesting debate that is unlikely to be settled any time soon. Additional views in the comments section please.

Ref: arxiv.org/abs/1405.0961: Do We Need Asimov’s Laws?
