
The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI

Artificial intelligence could sway elections, help Big Brother, and make hackers way more dangerous, suggests a new report.
February 21, 2018

AI could reboot industries and make the economy more productive; it’s already infusing many of the products we use daily. But a new report by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments—so much so that some AI research may need to be kept secret.

The report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, includes four dystopian vignettes that seem straight out of the Netflix science fiction show Black Mirror.

Scenario 1: The smarter phishing scam

An administrator for a building’s robot security system spends some of her time on Facebook during the workday. There she sees an ad for a model train set and downloads a brochure for it. Unbeknownst to her, the brochure is infected with malware; scammers used AI to figure out from details she has posted publicly that she is a model train enthusiast, and designed the brochure just for her. When she opens it, it allows hackers to spy on her machine and get her username and password for the building security system, letting them take control of it.

Scenario 2: The malware epidemic

An Eastern European hacking group takes a machine-learning technique normally used for defending computer systems and adapts it to build a more tenacious and pernicious piece of malware. The program uses techniques similar to those found in the Go-playing AI AlphaGo to continually generate new exploits. Well-maintained computers remain immune, but older systems and smart devices are infected. Millions of people are forced to pay a 300-euro ransom (in Bitcoin, naturally) to recover their machines. To make matters worse, attempts to counteract the malware using another exploit end up “bricking” many of the smart systems they were supposed to save. 

Scenario 3: The robot assassin

A cleaning robot infiltrates Germany’s ministry of finance by blending in with legitimate machines returning to the building after a shift outdoors. The following day, the robot performs routine cleaning tasks, identifies the finance minister using facial recognition, approaches her, and detonates a deadly concealed bomb. Investigators trace the robot killer to an office supply store in Potsdam, where it was acquired with cash, and the trail goes cold.

Scenario 4: A bigger Big Brother

A man is furious about rampant cyberattacks and the government’s apparent inability to act. Inspired by news stories, he becomes increasingly determined to do something—writing online posts about the dangers, ordering materials to make protest signs, and even buying a few smoke bombs, which he plans to use after giving a speech at a local park. The next day, the cops turn up at his office and inform him that their “predictive civil disruption system” has identified him as a potential threat. He leaves in handcuffs.

These four scenarios illustrate just a handful of the risks the study’s authors foresee. Here are some of the others: 

  • botnets that use AI to simulate the behavior of a vast group of human internet users, launching DDoS attacks on websites while fooling the software designed to detect and block such attacks
  • large-scale scam operations that identify potential victims online by the truckload, using AI to spot people with wealth
  • convincing news reports made up of authentic-looking but entirely fake AI-generated video and pictures
  • attacks by swarms of drones that a single person controls, using an AI to manage large numbers of semi-autonomous machines
  • systems that automate the drudge work of criminality—for example, negotiating ransom payments with people after infecting their computers with malware—to enable scams at scale

The study is less sure of how to counter such threats. It recommends more research and debate on the risks of AI and suggests that AI researchers need a strong code of ethics. But it also says they should explore ways of restricting potentially dangerous information, in the way that research into other “dual use” technologies with weapons potential is sometimes controlled.

AI presents a particularly thorny problem because its techniques and tools are already widespread, easy to disseminate, and increasingly easy to use—unlike, say, fissile material or deadly pathogens, which are relatively hard to produce and therefore easy to control. Still, there are precedents for restricting this kind of knowledge. For example, after the US government’s abortive attempt to impose secrecy on cryptography research in the 1980s, many researchers adopted a voluntary system of submitting papers to the National Security Agency for vetting.

Jack Clark, director of policy at OpenAI and one of the report’s authors, acknowledges that adopting secrecy could be tricky. “There’s always an incredibly fine line to walk,” he says.

Some AI researchers would apparently welcome a more cautious approach. Thomas Dietterich, a professor at Oregon State University who has warned of the criminal potential of AI before, notes that the report’s authors don’t include computer security experts or anyone from the likes of Google, Microsoft, and Apple. “The report seems to have been written by well-intentioned outsiders like me rather than people engaged in fighting cybercrime on a daily basis,” he says.
