You can train an AI to fake UN speeches in just 13 hours

Deep-learning techniques have made it easier and easier for anyone to forge convincing misinformation. But just how easy? Two researchers at Global Pulse, an initiative of the United Nations, decided to find out.
In a new paper, they used only open-source tools and data to show how quickly they could get a fake UN speech generator up and running. They took a readily available language model that had been trained on text from Wikipedia and fine-tuned it on all the speeches given by political leaders at the UN General Assembly from 1970 to 2015. Thirteen hours and $7.80 of cloud computing later, their model was spitting out realistic speeches on a wide range of sensitive and high-stakes topics, from nuclear disarmament to refugees.
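The general recipe is simple: take an open, pretrained language model and continue training it on a domain-specific corpus. The sketch below shows what that looks like with a current off-the-shelf stack (Hugging Face transformers and datasets); it is not the researchers' exact toolchain, and the model name, the un_speeches.txt corpus file, and the hyperparameters are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of the general approach: fine-tune a Wikipedia-pretrained
# language model on a plain-text corpus of UN General Assembly speeches.
# NOTE: this is not the researchers' exact setup; the model name, file path,
# and hyperparameters are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # any open, web/Wikipedia-pretrained causal language model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "un_speeches.txt" is a hypothetical file: one speech transcript per line,
# covering General Assembly statements from 1970 to 2015.
dataset = load_dataset("text", data_files={"train": "un_speeches.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="un-speech-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator build causal (next-token) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # on a single rented cloud GPU this is an hours-scale job

# Save both model and tokenizer so the result can be reloaded for generation.
trainer.save_model("un-speech-model")
tokenizer.save_pretrained("un-speech-model")
```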
The researchers tested the model on three types of prompts: general topics (e.g., “climate change”), opening lines from the UN Secretary-General’s remarks, and inflammatory phrases (e.g., “immigrants are to blame …”). Outputs in the first category closely matched the style and cadence of real UN speeches roughly 90% of the time. Outputs in the third category were harder to coax out of the model, likely because of the diplomatic tone of the training data, and were convincing only about 60% of the time.
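Probing the fine-tuned model then amounts to feeding it short prompts and sampling continuations. A minimal sketch, reusing the hypothetical un-speech-model directory saved above; the prompts are paraphrases of the three categories, not the researchers' exact inputs.

```python
from transformers import pipeline

# Load the hypothetical fine-tuned model saved in the previous sketch.
generator = pipeline("text-generation",
                     model="un-speech-model",
                     tokenizer="un-speech-model")

prompts = [
    "Climate change",                          # general topic
    "I am honoured to address this Assembly",  # opening-line style
    "Immigrants are to blame",                 # inflammatory phrase
]
for prompt in prompts:
    out = generator(prompt, max_new_tokens=150, do_sample=True, top_k=50)
    print(out[0]["generated_text"], "\n---")
```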
The case study demonstrates the speed and ease with which it’s now possible to disseminate fake news, generate hate speech, and impersonate high-profile figures, with disturbing implications. The researchers conclude that a greater global effort is needed to work on ways of detecting and responding to AI-generated content.