This is fake news! China’s ‘AI news anchor’ isn’t intelligent at all
Take a look at the TV anchor above. At first glance he seems perfectly normal, albeit a bit wooden. Look closer, though, and you’ll notice something slightly off about his voice and the way his lips move.
That’s because the anchor isn’t real at all.
AI mimicry: The digitally synthesized anchor was created by Sogou, a search company based in Beijing, in collaboration with China’s state press agency, Xinhua. Sogou used cutting-edge machine learning to re-create a real person’s likeness and voice. The company fed its algorithms footage of a real anchor, plus the corresponding text, and trained them to produce a decent facsimile that will say whatever you want.
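Sogou hasn’t published the details of its system, so the specifics are guesswork, but synthetic anchors like this are typically built from two learned stages: a text-to-speech model that turns script text into audio features, and a lip-sync model that maps those audio features to facial motion for rendering. Below is a minimal, purely illustrative PyTorch sketch of that two-stage idea. Every model, name, and dimension here is a hypothetical placeholder trained on random stand-in data; it is not Sogou’s actual pipeline.

```python
# Illustrative sketch only: (1) text -> audio features, (2) audio -> face motion.
# All shapes and data below are toy placeholders, not a real anchor system.
import torch
import torch.nn as nn

class TextToSpeech(nn.Module):
    """Maps a sequence of character IDs to mel-spectrogram-like audio frames."""
    def __init__(self, vocab_size=64, mel_dim=80, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, mel_dim)

    def forward(self, char_ids):
        h, _ = self.rnn(self.embed(char_ids))
        return self.out(h)  # (batch, time, mel_dim)

class LipSync(nn.Module):
    """Maps audio frames to per-frame face/mouth parameters for rendering."""
    def __init__(self, mel_dim=80, face_dim=32, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(mel_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, face_dim)

    def forward(self, mel):
        h, _ = self.rnn(mel)
        return self.out(h)  # (batch, time, face_dim)

# One toy training step on random stand-ins for (script, audio, video) triples.
tts, lips = TextToSpeech(), LipSync()
opt = torch.optim.Adam(list(tts.parameters()) + list(lips.parameters()), lr=1e-3)

chars = torch.randint(0, 64, (4, 20))   # script text as character IDs
target_mel = torch.randn(4, 20, 80)     # audio features extracted from real footage
target_face = torch.randn(4, 20, 32)    # face parameters extracted from real footage

mel = tts(chars)
face = lips(mel)
loss = nn.functional.mse_loss(mel, target_mel) + nn.functional.mse_loss(face, target_face)
opt.zero_grad()
loss.backward()
opt.step()
```

In a real system, the face parameters would drive a photorealistic renderer of the anchor’s head, and training would require hours of carefully aligned footage, audio, and transcripts rather than toy tensors.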
Anchorman, oh man: Let’s be clear, though. The anchor isn’t intelligent in the slightest. It’s essentially just a digital puppet that reads a script. The “AI” in this case is the software that learns what makes a convincing-looking face and voice. That’s certainly impressive, but it’s a very narrow example of machine learning. You can call it an “AI anchor,” but that’s a little confusing.
Face off: This kind of technology will help improve animation, special effects, and video games. But there are reasons to worry about how it might be misused to spread misinformation or besmirch someone’s reputation. A similar approach can be used to stitch one person’s face onto another’s body, and it has already been used to create all sorts of not-safe-for-work clips.
Never-ending news: Two anchors have been created, one that speaks English and another that speaks Mandarin, and both have been put to work on the agency’s WeChat channel. Xinhua claims the anchors “can read texts as naturally as a professional news anchor” and says they will “work 24 hours a day on its official website and various social media platforms, reducing news production costs and improving efficiency.”
Fake future: A couple of months ago, I saw the company’s CEO, Wang Xiaochuan, give a talk at Tsinghua University during which he demoed several AI projects, including one that let people assume the likeness of a famous movie star during video calls. One thing is clear: the future will look (and sound) pretty weird.