Artificial intelligence

Deepfakes may be a useful tool for spies

June 14, 2019
An image of a fake LinkedIn profile of the persona Katie Jones. Screenshot of LinkedIn via AP

A spy may have used an AI-generated face to deceive and connect with targets on social media.

The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most fascinating is the profile image: it demonstrates all the hallmarks of a deepfake, according to several experts who reviewed it.

Easy target: LinkedIn has long been a magnet for spies because it gives easy access to people in powerful circles. Agents will routinely send out tens of thousands of connection requests, pretending to be different people. Only last month, a retired CIA officer was sentenced to 20 years in prison for leaking classified information to a Chinese agent who made contact by posing as a recruiter on the platform.

Weak defense: So why did “Katie Jones” use an AI-generated face? Because it removes an important line of defense for detecting impostors: running a reverse image search on the profile photo. A photo stolen from a real person can be traced back to its source online, but a synthetic face has no original to find. It’s yet another way that deepfakes are eroding our trust in truth as they rapidly advance into the mainstream.

