Deepfakes may be a useful tool for spies

A spy may have used an AI-generated face to deceive and connect with targets on social media.
The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most striking is the profile image: it shows all the hallmarks of a deepfake, according to several experts who reviewed it.
Easy target: LinkedIn has long been a magnet for spies because it gives easy access to people in powerful circles. Agents will routinely send out tens of thousands of connection requests, pretending to be different people. Only last month, a retired CIA officer was sentenced to 20 years in prison for leaking classified information to a Chinese agent who made contact by posing as a recruiter on the platform.
Weak defense: So why would “Katie Jones” use an AI-generated face? Because it removes an important line of defense for detecting impostors: a reverse image search on the profile photo turns up nothing. It’s yet another way that deepfakes are eroding our trust in truth as they rapidly advance into the mainstream.