
How to Create a Malevolent Artificial Intelligence

If cybersecurity experts are to combat malevolent artificial intelligence, they will need to know how such a system can emerge, say computer scientists.

The possibility that a malevolent artificial intelligence might pose a serious threat to humankind has become a hotly debated issue. Various high-profile individuals, from the physicist Stephen Hawking to the tech entrepreneur Elon Musk, have warned of the danger.

That is why the field of artificial intelligence safety is emerging as an important discipline. Computer scientists have begun to analyze the unintended consequences of poorly designed AI systems, of systems created with faulty ethical frameworks, and of systems that do not share human values.

But there’s an important omission in this field, say the independent researcher Federico Pistono and Roman Yampolskiy of the University of Louisville in Kentucky. “Nothing, to our knowledge, has been published on how to design a malevolent machine,” they say.

That’s a significant problem because computer security specialists must understand the beast they are up against before they can hope to defeat it.

Today, Pistono and Yampolskiy attempt to put that right, at least in part. Their key point is that a malevolent AI is most likely to emerge only in certain environments, so they have set out the conditions in which such a system could arise. Their conclusions will make uncomfortable reading for one or two companies.

So what warning signs indicate that work on a malevolent AI system might be under way? Pistono and Yampolskiy say there are likely to be some clear signs.

One of the most obvious would be the absence of oversight boards in the development of AI systems. “If a group decided to create a malevolent artificial intelligence, it follows that preventing a global oversight board committee from coming to existence would increase its probability of succeeding,” they say.

Such a group does this by downplaying the significance of its work and the dangers it poses and even by circulating confusing information. “The strategy is to disseminate conflicting information that would create doubt in the public's imagination about the dangers and opportunities of artificial general intelligence research,” they say.

Another important sign would be the existence of closed-source code behind the artificial intelligence system. “It is well known among cryptography and computer security experts that closed-source software and algorithms are less secure than their free and open-source counterpart,” say Pistono and Yampolskiy. “The very existence of non-free software and hardware puts humanity at a greater risk.”

Instead, they say, artificial intelligence could be developed using open-source software, although whether this would be safer is unclear. The open-source process allows more people to look for and fix flaws. But it also gives access to criminals, terrorists, and the like, who might use the software for nefarious purposes.

Artificial intelligence is currently being developed in both these ways.

The closed-source systems have had well-publicized successes. Google’s AI system recently triumphed over one of the world’s best human players in the ancient game of Go, for example. Facebook also has a high-profile AI research group, albeit one that has been less publicly successful.

Neither company has been clear about the way its research is governed. Google’s DeepMind subsidiary, for example, says it has an AI ethics board but has consistently refused to reveal who sits on it. Facebook merely says that fears over AI have been overblown.

The development of open-source artificial intelligence is less well advanced. But it has recently begun to gather momentum, driven at least in part by fears about commercial rivals.

The highest profile of these efforts is OpenAI, a nonprofit artificial intelligence organization started in 2015 with the goal of advancing “digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

It is backed by pledges of up to $1 billion in funding from, among others, the tech entrepreneur Elon Musk, who has repeatedly warned of the dangers of AI. (Musk also part-funded the work of one of the authors of this study, Roman Yampolskiy.)

Whether OpenAI will increase or reduce the chances of a malevolent AI system emerging isn’t clear. But the goal at least is to ensure that whatever happens occurs in full public view.

One shortcoming in all this is that the practice of cybersecurity for AI systems is much less well developed than for ordinary software.

Computer security experts have long recognized that malicious software poses a significant threat to modern society. Many safety-critical applications—nuclear power stations, air traffic control, life support systems, and so on—are little more than a serious design flaw away from disaster. The situation is exacerbated by intentionally malicious software—viruses, Trojans, and the like—designed to hunt down and exploit these flaws.

To combat this, security experts have developed a powerful ecosystem that identifies flaws and fixes them before they can be exploited. They study malicious software and look for ways to neutralize it.

They also have a communications system for spreading this information within their community but not beyond. This allows any flaws to be corrected quickly before knowledge of them spreads.

But a similar system does not yet operate effectively in the world of AI research.

That may not matter while AI systems are relatively benign. Most of today’s AI systems focus on tasks such as natural language processing, object recognition, and driving.

But, given the pace of development in recent years, this is likely to change quickly. An important question is how humans might combat a malevolent artificial intelligence that could have dire consequences for humanity. It’s a question worth considering in detail now.

Ref: arxiv.org/abs/1605.02817: Unethical Research: How to Create a Malevolent Artificial Intelligence
