
The inside scoop on watermarking and content authentication

President Biden's executive order makes a big bet on new AI-labeling technologies

November 6, 2023
[Illustration: arrows flag content on a laptop as neutral or warning pink to indicate the use of AI. Stephanie Arnett/MITTR | Unsplash, Envato]

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

On October 30, President Biden released his executive order on AI, a major move that I bet you’ve heard about by now. If you want a rundown of the most important points you need to know, check out a piece I wrote with my colleague Melissa Heikkilä.

For me, one of the most interesting parts of the executive order was the emphasis on watermarking and content authentication. I’ve previously written a bit about these technologies, which aim to label content to determine whether it was made by a machine or a human. 

The order says that the government will be promoting these tools, the Department of Commerce will establish guidelines for them, and federal agencies will use such techniques in the future. In short, the White House is making a big bet on these methods as a way to fight AI-generated misinformation. 

The promotion of these technologies continued at the UK’s AI Safety Summit, which began on November 1, when Vice President Kamala Harris said the administration is encouraging tech companies to “create new tools to help consumers discern if audio and visual content is AI-generated.”

While there isn’t much clarity on how exactly all this will happen, a senior administration official told reporters on Sunday that the White House planned to work with the Coalition for Content Provenance and Authenticity, or C2PA, the group behind an open-source internet protocol for tracing the origins of digital content.

Lucky for you Technocrat readers, I dug into C2PA back in July! So here’s a refresher on what you need to know about it.

What are the basics?

Watermarking and other content-authentication technologies offer an approach to identifying AI-generated content that’s different from AI detection, which is done after the fact and has proved fairly ineffective so far. (AI detection relies on technology that evaluates an existing piece of content and asks, Was this created by AI?) 

In contrast, watermarking and content authentication, also called provenance technologies, operate on an opt-in model: content creators can append information up front about the origins of a piece of content and how it may have changed as it travels online. The hope is that this increases viewers’ trust in that information.

Most current watermarking technologies embed an invisible mark in a piece of content to signal that the material was made by an AI. Then a watermark detector identifies that mark. Content authentication is a broader methodology that entails logging information about where content came from in a way that is visible to the viewer, sort of like metadata.  
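To make that loop concrete, here is a toy sketch of my own, not any vendor’s production scheme: it hides a known bit pattern in the least-significant bits of grayscale pixel values, and the “detector” simply checks how much of that pattern is present. The mark, threshold, and function names are all illustrative assumptions.

```python
# Toy watermark: overwrite each pixel's least-significant bit (LSB)
# with a repeating, known bit pattern, then look for that pattern later.
# Purely illustrative -- real schemes are statistical and more subtle.

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical watermark pattern

def embed(pixels: list[int]) -> list[int]:
    """Stamp the mark into the LSB of each pixel value."""
    return [(p & ~1) | MARK[i % len(MARK)] for i, p in enumerate(pixels)]

def detect(pixels: list[int], threshold: float = 0.95) -> bool:
    """Flag content as machine-made if enough LSBs match the mark."""
    hits = sum((p & 1) == MARK[i % len(MARK)] for i, p in enumerate(pixels))
    return hits / len(pixels) >= threshold

image = [137, 42, 200, 88, 91, 253, 17, 64] * 4  # stand-in "image"
print(detect(embed(image)))  # True: the marked copy is flagged
print(detect(image))         # False: the unmarked original passes
```

A real detector works statistically rather than on exact bits, but the shape is the same: the generator leaves a hidden signal, and the detector tests for it.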

C2PA focuses primarily on content authentication through a protocol it calls Content Credentials, though the group says its technology can be coupled with watermarking. It is “an open-source protocol that relies on cryptography to encode details about the origins of a piece of content,” as I wrote back in July. “This means that an image, for example, is marked with information by the device it originated from (like a phone camera), by any editing tools (such as Photoshop), and ultimately by the social media platform that it gets uploaded to. Over time, this information creates a sort of history, all of which is logged.”
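Here is a minimal sketch of that chain-of-custody idea, under loud assumptions: the JSON claim format and the add_claim and verify_chain helpers below are my own simplification for illustration, not the actual C2PA manifest specification.

```python
# Sketch of provenance chaining: each actor (camera, editor, platform)
# hashes the current content, links to the previous claim, and signs it.
# Simplified stand-in for illustration -- not the real C2PA format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def add_claim(chain, content: bytes, actor: str, key: Ed25519PrivateKey):
    claim = {
        "actor": actor,
        "sha256": hashlib.sha256(content).hexdigest(),
        "prev": chain[-1]["sha256"] if chain else None,  # link to history
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    chain.append({**claim, "sig": key.sign(payload), "pub": key.public_key()})

def verify_chain(chain) -> bool:
    """Check every signature; tampering with any claim breaks it."""
    for entry in chain:
        claim = {k: entry[k] for k in ("actor", "sha256", "prev")}
        payload = json.dumps(claim, sort_keys=True).encode()
        try:
            entry["pub"].verify(entry["sig"], payload)
        except InvalidSignature:
            return False
    return True

chain = []
camera, editor = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
photo = b"raw sensor data"
add_claim(chain, photo, "phone camera", camera)
add_claim(chain, photo + b"+color correction", "editing tool", editor)
print(verify_chain(chain))  # True: the logged history checks out
```

The design choice that matters is that each claim commits to the one before it, so a viewer gets an ordered, tamper-evident history rather than a single yes-or-no label.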

The result is verifiable information, collected in what C2PA proponents compare to a “nutrition label,” about where a piece of content came from, whether it was machine generated or not. The initiative and its affiliated open-source community have been growing rapidly in recent months as companies rush to verify their content. 

Where does the White House come in?

The key part of the EO says that the Department of Commerce will be “establishing standards and best practices for detecting AI-generated content and authenticating official content” and adds that “federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

Crucially, as Melissa and I reported in our story, the executive order falls short of requiring industry players or government agencies to use this technology.

But while the experts Melissa and I spoke with were generally encouraged by the provisions around standards, watermarking, and content labeling, watermarking in particular is not likely to solve all our problems. Researchers have found that the technique is vulnerable to being tampered with, which can trigger false positives and false negatives. 

Soheil Feizi of the University of Maryland has conducted two studies of watermarking technologies and found them “unreliable.” He says the risk of false positives and negatives is so extensive that watermarks provide “basically zero information.”

“Imagine if there is a tweet or a text with a hidden official White House watermark, but that tweet was actually written by adversaries,” Feizi warns. “That can cause more problems than solving any of the current problems.”

What’s more, his research found that invisible and tamper-proof watermarking technologies are theoretically “impossible,” though he has not studied the efficacy of content authentication techniques. 
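To see why, here is the failure mode played out on the toy LSB scheme from earlier. This is again my own illustration; production watermarks are statistical and harder to break, though Feizi’s results suggest the same attacks scale. An imperceptible brightness shift erases the mark, and anyone who knows the pattern can forge it onto human-made content.

```python
# Two attacks on the toy LSB watermark sketched earlier:
# 1. "Washing": a +1 brightness shift flips every LSB -> false negative.
# 2. "Forging": stamping the known mark onto human content -> false positive.
MARK = [1, 0, 1, 1, 0, 0, 1, 0]

def embed(pixels):
    return [(p & ~1) | MARK[i % len(MARK)] for i, p in enumerate(pixels)]

def detect(pixels, threshold=0.95):
    hits = sum((p & 1) == MARK[i % len(MARK)] for i, p in enumerate(pixels))
    return hits / len(pixels) >= threshold

marked = embed([137, 42, 200, 88] * 8)
washed = [p + 1 for p in marked]            # visually identical edit
print(detect(washed))                       # False: the mark is destroyed
print(detect(embed([10, 20, 30, 40] * 8)))  # True: mark forged onto human image
```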

I asked C2PA how it has been working with the federal government thus far. Mounir Ibrahim, the cochair of the governmental affairs team, said in an email that the group has been in “regular contact” with federal agencies including the National Security Council and White House.

In another email, a spokesperson said C2PA hopes the White House’s action will bring increased awareness and adoption of its content credentials. The spokesperson did not disclose any plans regarding the use of the protocol by federal agencies but said, “The C2PA and content credentials are ready for adoption today—and we encourage everyone to reach out and get involved. We stand ready to educate and help any agency begin testing and adopting.”

What I learned this week

One of the pioneers of AI bias research, Joy Buolamwini, is out with a new book, and my colleague Melissa chatted with her about her latest work and how she’s thinking about this critical moment in artificial intelligence. 

As Melissa writes, “She is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies pen the rules that apply to them—repeating the very mistake, she argues, that has previously allowed biased and oppressive technology to thrive.”
