Grace Windheim had heard of deepfakes before. But she had never considered how to make one. It was a viral meme using the technology that led her to research the possibility—and discover that it was super easy and completely free.
Within a day, she had created a step-by-step YouTube tutorial to walk others through the process. “Making one of these deepfakes and overlaying audio is not as complicated as you may think,” she says in the video, published on August 4. It has since been viewed over 360,000 times.
Windheim is part of a new group of online creators who are toying with deepfakes as the technology grows increasingly accessible and seeps into internet culture. The phenomenon is not surprising; media manipulation tools have often gained traction through play and parody. But the trend also raises fresh concerns about the technology's potential for abuse.
Deepfakes have already been used to harass women by nonconsensually swapping their faces into porn videos. Scholars also worry about their ability to disrupt elections. While the deepfakes created for memes are still obviously fake and relatively harmless, they may not stay that way for long.
“There’s a fine line between using deepfakes for entertainment and memes, and using them for harm,” Windheim says. “In this tutorial, I’m saying, ‘This is how you make this particular deepfake.’ But the scary thing about the script is it can just be applied to make any type of deepfake you want.”
“I’ve Been a Fool”
Windheim, a recent college grad, works as a content creator at the San Francisco–based startup Kapwing. The company, which got its start as a meme maker, offers a free suite of browser-based video-editing software tools. As part of her job, Windheim runs the YouTube channel and produces content marketing videos to show off the products’ capabilities.
In early August, she came across a particularly viral search term on Google Trends. Three of the five top queries were asking about a “Baka Mitai deepfake meme.” “I almost never see a query come up that frequently,” she says.
The meme, as it turns out, was based on a video of a YouTuber lip-synching to a Japanese video-game song called “Baka Mitai” (translation: “I’ve Been a Fool”). Various internet users had used the video to create crappy deepfakes of everyone from Barack Obama to Thanos singing the song. Despite its popularity, however, Windheim found that little had been written about how to actually make it. She saw an opportunity.
The particular deepfake algorithm that people were using comes from a 2019 research paper, "First Order Motion Model for Image Animation," presented at NeurIPS, the largest annual AI research conference. Unlike more complex approaches, which require training on many images of the target, it allows a user to take any video of a person's face and use it to animate a single photo of someone else's face with only a few lines of code.
Windheim found the open-source algorithm in a YouTube tutorial and ported it into a Google Colab notebook, a free service for running code in the cloud. After a few tries, aided by the skills she’d picked up in the occasional coding class in college, she got the script to spit out a deepfake video. She then synched the song to the video with Kapwing’s tools, creating a new version of the meme.
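The Colab workflow she describes boils down to cloning the paper's open-source repository and running its demo script on a source photo and a driving video. A minimal sketch, assuming the first-order-model repository's published demo interface and its pretrained VoxCeleb checkpoint (flag names and the checkpoint file are taken from the repository's README and may have changed; the image and video file names are placeholders):

```shell
git clone https://github.com/AliaksandrSiarohin/first-order-model
cd first-order-model

# source_image is the still photo to animate; driving_video is the clip
# whose facial motion gets transferred onto it. The pretrained checkpoint
# (vox-cpk.pth.tar) must be downloaded separately via links in the README.
python demo.py \
  --config config/vox-256.yaml \
  --checkpoint vox-cpk.pth.tar \
  --source_image source.png \
  --driving_video driving.mp4 \
  --relative --adapt_scale
# Writes the animated face to result.mp4, ready for an audio overlay
# in a video editor such as Kapwing.
```

The `--relative` and `--adapt_scale` options transfer the driving video's motion relative to its starting pose rather than copying absolute keypoint positions, which generally produces a cleaner animation when the two faces are framed differently.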
Since she posted her tutorial on Kapwing’s YouTube channel, a number of other YouTubers have also made tutorials using the same copy-and-pasted algorithm. The difference: many of them are teaching their audience how to make any kind of deepfake meme. One even teaches people how to make them on mobile.
These memes are now appearing everywhere on social media: on Twitter, Instagram, and especially TikTok. The platform’s short videos, which often feature snappy choreography to catchy music, are particularly conducive to being deepfaked to mesmerizing effect. The #deepfake hashtag in the app has already racked up more than 120 million views.
There’s a telltale wonkiness to the faces in the videos made with this algorithm, which makes its handiwork easy to recognize; that is part of the deepfakes’ humor. These imperfections—and the surrealist quality of the memes—will keep them from being mistaken for reality. At the moment, more hyper-realistic deepfakes are far more technically challenging and computationally expensive to create.
But at the rate the technology is advancing, easy-to-make deepfakes that are nearly indistinguishable from reality are likely around the corner. Companies like Chinese tech giant Tencent, owner of WeChat, have publicly announced their intentions to invest more resources in advancing the state of the art for commercial applications.
Deepfakes are not inherently bad. The technology has already been used by artists, educators, and others as a powerful new tool for creative expression. In February, for example, Time magazine used deepfakes to re-create the experience of Martin Luther King Jr. delivering his “I Have a Dream” speech in virtual reality. Ultimately, regulators need to define what is appropriate use and what could lead to harm.
For now, Windheim is relying on her own judgment to make that call. Before posting her video, she read up on the implications of deepfakes and had a conversation with her colleagues. “We’re never intending our products to help users spread misinformation,” she says, “so we just wanted to sanity-check ourselves.”
In the end, they decided on some ground rules: they would focus their tutorials on making specific memes, never on creating deepfakes outside of that context. As long as it’s entertainment and within meme culture, she says, “we’re in the clear zone.”