Take a look at the TV anchor above. At first glance he seems perfectly normal, albeit a bit wooden. Look closer, though, and you’ll notice something slightly off about his voice and the way his lips move.
That’s because the anchor isn’t real at all.
AI mimicry: The digitally synthesized anchor was created by Sogou, a search company based in Beijing, in collaboration with China’s state press agency, Xinhua. Sogou used some cutting-edge machine learning to copy and re-create a real person’s likeness and voice. The company fed its algorithms footage of a real anchor, plus corresponding text, and trained it to reproduce a decent facsimile that will say whatever you want.
Anchorman, oh man: Let’s be clear, though. The anchor isn’t intelligent in the slightest. It’s essentially just a digital puppet that reads a script. The “AI” in this case is the software that learns what makes a convincing-looking face and voice. That’s certainly impressive, but it’s a very narrow example of machine learning. You can call it an “AI anchor,” but that’s a little confusing.
Face off: This kind of technology will help improve animation, special effects, and video games. But there are reasons to be worried about how it might be misused to spread misinformation or besmirch someone’s reputation. A similar approach can be used to stitch a person’s face onto someone else’s body, and it’s already been used to create all sorts of unsafe-for-work clips.
Never-ending news: Two anchors have been created, one that speaks English and another that speaks Mandarin. Both have been put to work by the agency on its WeChat channel. Xinhua claims the anchors “can read texts as naturally as a professional news anchor” and says they will “work 24 hours a day on its official website and various social media platforms, reducing news production costs and improving efficiency.”
Fake future: A couple of months ago, I saw the company’s CEO, Wang Xiaochuan, give a talk at Tsinghua University during which he demoed several AI projects, including one that let people assume the likeness of a famous movie star during video calls. One thing is clear: the future will look (and sound) pretty weird.