
We’re fighting fake news AI bots by using more AI. That’s a mistake.

Facebook and others are battling complex disinformation with AI-driven defenses. But this can only get us so far, argues an expert on high-tech propaganda.
January 8, 2020
  • Samuel Woolley is an assistant professor in the Moody College of Communication at the University of Texas at Austin. This is an adapted excerpt from his upcoming book The Reality Game.

Any time you log on to Twitter and look at a popular post, you’re likely to find bot accounts liking or commenting on it. Click through and you can see they’ve tweeted many times, often in a short time span. Sometimes their posts are selling junk or spreading digital viruses. Other accounts, especially the bots that post garbled vitriol in response to particular news articles or official statements, are entirely political.

It’s easy to assume this entire phenomenon is powered by advanced computer science. Indeed, I’ve talked to many people who think algorithms driven by machine learning or artificial intelligence are giving political bots the ability to learn from their surroundings and interact with people in a sophisticated way.

There is a widespread belief that during events in which researchers now believe political bots and disinformation played a key role—the Brexit referendum, the Trump-Clinton contest in 2016, the Crimea crisis—smart AI tools allowed computers to pose as humans and helped manipulate the public conversation.

Pundits and journalists have fueled this: There have been extremely provocative stories about the rise of a “weaponized AI propaganda machine,” and stories claiming that “artificial intelligence conquered democracy.” Even my own research into how social media is used to mold public opinion, hack truth, and silence protest—what is known as “computational propaganda”—has been quoted in articles that suggest our robot overlords are already here.

The reality is, though, that complex mechanisms like artificial intelligence have played little role in computational propaganda campaigns to date. All the evidence I’ve seen on Cambridge Analytica suggests the firm never launched the “psychographic” marketing tools it claimed to possess during the 2016 US election—though it said it could target individuals with specific messages based on personality profiles derived from its controversial Facebook database.

When I was at the Oxford Internet Institute, meanwhile, we looked into how and whether Twitter bots were used during the Brexit debate. We found that while many were used to spread messages about the Leave campaign, the vast majority of the automated accounts were very simple. They had been built simply to boost likes and follows, to spread links, to game trends, or to troll opposition. The online conversation was gamed by small groups of human users who understood the magic of memes and virality, of seeding conspiracies online and watching them grow. Conversations were blocked by basic bot-generated spam and noise, purposefully attached to particular hashtags in order to demobilize online discussion. Links to news articles that showed a politician in a particular light were hyped by fake or proxy accounts made to post and repost the same junk over and over and over. These campaigns were wielded quite bluntly: the bots were not designed to be functionally conversational. They did not harness AI.

Dumb no more

There are, however, signals that AI-enabled computational propaganda and disinformation are beginning to be used. Hackers and other groups have already begun testing the effectiveness of more dangerous AI bots over social media. A 2017 piece from Gizmodo reported that two data scientists taught an artificial intelligence to design its own phishing campaign: “In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.”

Problematic content is not spread only by machine-learning-enabled political bots. Nor are problematic uses or designs of technology being generated only by social-media firms. Researchers have pointed out that machine learning can be tainted by poison attacks—malicious actors influencing “training data” in order to change the results of a given algorithm—before the machine is even made public. 
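
To make the idea of a poisoning attack concrete, here is a minimal sketch in Python. The synthetic dataset, the off-the-shelf scikit-learn classifier, and the 20% label-flip rate are all illustrative assumptions rather than a reconstruction of any documented attack; the point is simply that tampering with training labels before a model is built changes what the model learns.

# Minimal illustration of a label-flipping "poisoning" attack.
# The dataset, model, and 20% flip rate are assumptions for demonstration,
# not a reconstruction of any real-world campaign.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for a model's "training data."
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker quietly flips the labels of 20% of the training examples
# before the model is trained.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flipped = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[flipped] = 1 - y_poisoned[flipped]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))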

Kalev Leetaru, a senior fellow at George Washington University, suggests that the first attacks driven by AI bots may not be aimed at social media but might instead take the form of a distributed denial-of-service attack, which shuts down targeted web servers by flooding them with traffic.

“Imagine for a moment that you handed that botnet over to the control of a deep learning system and gave that AI algorithm complete control over every knob and dial of that botnet,” Leetaru writes.

“You also give it live feeds of global internet status information from major cybersecurity and monitoring vendors around the world so it can observe second-by-second how the victim and the rest of the internet at large is responding to the attack. Perhaps this all comes after you’ve had the algorithm spend several weeks monitoring the target in exquisite detail to understand the totality and nuance of its traffic patterns and behaviors and burrow its way through its outer layers of defenses.”

Beyond defense

In April 2018 Mark Zuckerberg appeared before Congress: he was under the political microscope for the mishandling of user information during the 2016 election. In his two-part testimony he mentioned artificial intelligence more than 30 times, suggesting that AI was going to be the solution to the problem of digital disinformation by providing programs that would combat the sheer volume of computational propaganda. He predicted that in the next decade, AI would be the savior for the massive problems of scale that Facebook and others come up against when dealing with the global spread of junk content and manipulation. 

So is there a way we could use AI or automated bot technology to tackle the manipulation of public opinion online? Can we use AI to fight AI? 

The Observatory on Social Media at Indiana University has built public tools that harness machine learning to detect bots, examining some 1,200 features of a given account to determine whether it is more likely to be a human or a bot.
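
The Observatory’s production tools are far more sophisticated, but the underlying recipe—hand-crafted account features fed to a supervised classifier—can be sketched in a few lines. Everything in the sketch below (the four features, the toy training accounts, the choice of a random forest) is an illustrative assumption, not a description of the Observatory’s actual system.

# Toy sketch of feature-based bot detection, in the spirit of (but far
# simpler than) the Indiana University tools. Features and labels are
# invented for illustration; a real system uses ~1,200 features and
# large labeled datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-account features: [tweets per day, followers/following ratio,
# fraction of posts that are retweets, mean seconds between posts]
X_train = np.array([
    [300.0, 0.01, 0.95,   12.0],   # labeled bot
    [250.0, 0.05, 0.90,   20.0],   # labeled bot
    [  4.0, 1.20, 0.30, 9000.0],   # labeled human
    [  7.0, 0.80, 0.10, 4000.0],   # labeled human
])
y_train = np.array([1, 1, 0, 0])   # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: estimated probability that it is a bot.
unknown_account = np.array([[180.0, 0.02, 0.85, 45.0]])
print("P(bot) =", clf.predict_proba(unknown_account)[0, 1])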

And Facebook product manager Tessa Lyons said in a 2018 announcement that “Machine learning helps us identify duplicates of debunked stories. For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.” 

In such cases, social-media firms can harness machine learning to pick up, and even verify, fact-checks from around the globe and use these evidence-driven corrections to flag bogus content.
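
Facebook has not published the details of that pipeline, but the basic technique—matching incoming posts against a library of already debunked claims by text similarity—can be illustrated with a rough sketch. The debunked-claims list, the sample posts, and the similarity threshold below are assumptions chosen purely for demonstration.

# Toy sketch of flagging near-duplicates of already debunked claims.
# The claims list, incoming posts, and 0.4 threshold are illustrative
# assumptions, not a description of any platform's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = [
    "You can save a person having a stroke by pricking their finger "
    "with a needle to draw blood.",
]

incoming_posts = [
    "Doctors hate this trick: prick the finger of a stroke victim with "
    "a needle and draw blood to save their life!",
    "The city council voted to repave Main Street next spring.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(debunked + incoming_posts)
debunked_vecs = matrix[: len(debunked)]
post_vecs = matrix[len(debunked):]

# Cosine similarity between each incoming post and each debunked claim.
similarity = cosine_similarity(post_vecs, debunked_vecs)

THRESHOLD = 0.4  # assumed cutoff for "looks like the same claim"
for post, scores in zip(incoming_posts, similarity):
    if scores.max() >= THRESHOLD:
        print("FLAG for fact-check review:", post[:60], "...")
    else:
        print("no match:", post[:60], "...")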

There is a big debate in the academic community, however, as to whether passively identifying potentially false information for social-media users is actually effective. Some researchers suggest that fact-checking efforts both online and offline do not work very effectively in their current form. In early 2019, the fact-checking website Snopes, which had partnered with Facebook in such corrective efforts, broke off the relationship. In an interview with the Poynter Institute, Snopes’s vice president of operations Vinny Green said, “It doesn’t seem like we’re striving to make third-party fact checking more practical for publishers—it seems like we’re striving to make it easier for Facebook.” 

Companies like Facebook continue to rely on small organizations, usually nonprofits, to vet content. Potentially false articles or videos are often passed to these groups with no background information on how or why they were flagged in the first place.

These efforts aren’t geared toward helping news organizations vet the heaps of content or leads they receive each day to help under-resourced reporters do better work. Rather, they help a multibillion-dollar company keep its own house clean in a post hoc fashion. It is time for Facebook to take responsibility internally for fact-checking, rather than passing off the task of verifying or debunking news reports to other groups. Facebook and other social-media companies must also stop relying on fact-checks after the fact—that is, only after a false article has gone viral. These companies need to generate some kind of early warning system for computational propaganda.

Facebook, Google, and others like them employ people to find and take down content that contains violence or information from terrorist groups. They are much less zealous, however, in their efforts to get rid of disinformation. The plethora of different contexts in which false information flows online—everywhere from an election in India to a major sporting event in South Africa—makes it tricky for AI to operate on its own, absent human knowledge. But in the coming months and years it will take hordes of people across the world to effectively vet the massive amounts of content in the countless circumstances that will arise.

There simply is no easy fix to the problem of computational propaganda on social media. It is the companies’ responsibility, though, to find a way to fix it. So far Facebook seems far more focused on public relations than on regulating the flow of computational propaganda or graphic content. According to The Verge, the company spends more time celebrating its efforts to get rid of particular pieces of vitriol or violence than on systematically overhauling its moderation processes.

Beyond fact-checking

It will be some combination of human labor and AI that eventually succeeds in combating computational propaganda, but how this will happen is simply not clear. AI-enhanced fact-checking is only one route forward. Machine learning and deep learning, in concert with human workers, can combat computational propaganda, disinformation, and political harassment in several other ways. 

Jigsaw, the Google-based technology incubator where I served a one-year term as a research fellow, designed and built an AI-based tool called Perspective to combat online trolling and hate speech. This tool (which I didn’t work on myself) is an API that allows developers to automatically detect toxic language. 

It’s controversial because it not only runs the risk of false positives—flagging posts that don’t actually contain trolling or abuse—but also moderates speech. According to Wired, the tool was trained using machine learning, but any such tool is also trained using inputs from humans, who have their own biases. So could a tool built to detect racist or hateful language fail because of flawed training?
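
For a sense of what such an API looks like in practice, here is a rough sketch of a call to Perspective’s comment-analysis endpoint. It assumes the publicly documented v1alpha1 REST interface and uses a placeholder API key; the exact request format and attribute names should be checked against Jigsaw’s current documentation before relying on them.

# Rough sketch of querying the Perspective API for a toxicity score.
# The API key is a placeholder; endpoint and field names reflect the
# documented v1alpha1 interface and may change.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    """Return Perspective's 0-to-1 toxicity estimate for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You make some interesting points here."))
    print(toxicity_score("You are an idiot and everyone hates you."))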

In 2016 Facebook launched DeepText, an AI tool similar to Google’s Perspective. The company says it helped delete over 60,000 hateful posts a week. Facebook admitted, however, that the tool still relied on a large pool of human moderators to actually get rid of harmful content. Twitter, meanwhile, finally made moves at the end of 2017 to more carefully ban similarly threatening or violent posts. But while it has started curbing this problematic material—and is also deleting hordes of political bot accounts—Twitter has given no clear indication of how it detects and deletes these accounts. My research collaborators and I continue to find massive manipulative botnets on Twitter nearly every month.

Beyond the horizon

It’s unsurprising that a technologist like Zuckerberg would propose a technological fix, but AI is not perfect on its own. The myopic focus of tech leaders on computer-based solutions reflects the naïveté and arrogance that caused Facebook and others to leave users vulnerable in the first place.

There are not yet armies of smart AI bots working to manipulate public opinion during contested elections. Will there be in the future? Perhaps. But it’s important to note that even armies of smart political bots will not function on their own: They will still require human oversight to manipulate and deceive. We are not facing an online version of The Terminator here. Luminaries from the fields of computer science and AI, including Turing Award winner Ed Feigenbaum and Geoff Hinton, the “godfather of deep learning,” have argued strongly against fears that “the singularity”—the unstoppable age of smart machines—is coming anytime soon. In a survey of American Association for Artificial Intelligence fellows, over 90% said that super-intelligence is “beyond the foreseeable horizon.” Most of these experts also agreed that when and if super-smart computers do arrive, they will not be a threat to humanity.

Stanford researchers working to track the state of the art in AI suggest that our “machine overlords,” at present, “still can’t exhibit the common sense or the general intelligence of even a 5-year-old.” So how will these tools subvert human rule or, say, solve exceedingly human social problems like political polarization and a lack of critical thinking? The Wall Street Journal put it succinctly in 2017: “Without Humans, Artificial Intelligence Is Still Pretty Stupid.” 

Grady Booch, a leading expert on AI systems, is also skeptical about the rise of super-smart rogue machines, but for a different reason. In a TED talk in 2016, he said that “to worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend.” 

More important, Booch stressed, current AI systems can do all sorts of amazing things, from conversing with humans in natural language to recognizing objects—but these things are decided upon by humans and encoded with human values. They are not so much programmed as taught how to behave.

“In scientific terms, this is what we call ground truth,” Booch says, “and here’s the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well trained.”

I would take Booch’s idea even further. To address the problem of computational propaganda we need to zero in on the people behind the tools. 

Yes, ever-evolving technology can automate the spread of disinformation and trolling. It can let perpetrators operate anonymously and without fear of discovery. But this suite of tools as a mode of political communication is ultimately focused on achieving the human aim of control. Propaganda is a human invention, and it’s as old as society. As an expert on robotics once told me, we should not fear machines that are smart like humans, so much as humans who are not smart about how they build machines.

Excerpted from The Reality Game: How the Next Wave of Technology Will Break the Truth, by Samuel Woolley. Copyright © 2020. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.
