
How to create, release, and share generative AI responsibly

Companies including OpenAI and TikTok have signed up to a new set of guidelines designed to help them be more transparent around generative AI.

February 27, 2023
""
Stephanie Arnett/MITTR | Getty, Envato

A group of 10 companies, including OpenAI, TikTok, Adobe, the BBC, and the dating app Bumble, has signed up to a new set of guidelines on how to build, create, and share AI-generated content responsibly.

The recommendations call for both the builders of the technology, such as OpenAI, and the creators and distributors of synthetic media, such as the BBC and TikTok, to be more transparent about what the technology can and cannot do, and to disclose when people might be interacting with this type of content.

The voluntary recommendations were put together by the Partnership on AI (PAI), an AI research nonprofit, in consultation with over 50 organizations. PAI’s partners include big tech companies as well as academic, civil society, and media organizations. The first 10 companies to commit to the guidance are Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, Witness, and synthetic-media startups Synthesia, D-ID, and Respeecher. 

“We want to ensure that synthetic media is not used to harm, disempower, or disenfranchise but rather to support creativity, knowledge sharing, and commentary,” says Claire Leibowicz, PAI’s head of AI and media integrity. 

One of the most important elements of the guidelines is an agreement by the companies to research and deploy ways to tell users when they’re interacting with something that’s been generated by AI. These might include watermarks, disclaimers, or traceable elements in an AI model’s training data or metadata.

Regulation aimed at reining in the potential harms of generative AI still lags behind the technology. The European Union, for example, is trying to include generative AI in its upcoming AI law, the AI Act, which could require companies to disclose when people are interacting with deepfakes and to meet certain transparency requirements.

Generative AI is a Wild West right now, says Henry Ajder, an expert on generative AI who contributed to the guidelines, and he hopes they will show companies the key things they need to look out for as they incorporate the technology into their businesses.

Raising awareness and starting a conversation around responsible ways to think about synthetic media is important, says Hany Farid, a professor at the University of California, Berkeley, who researches synthetic media and deepfakes. 

But “voluntary guidelines and principles rarely work,” he adds. 

While companies such as OpenAI can try to put guardrails on technologies they create, like ChatGPT and DALL-E, other players that are not part of the pact—such as Stability AI, the startup that created the open-source image-generating AI model Stable Diffusion—can let people generate inappropriate images and deepfakes.

“If we really want to address these issues, we’ve got to get serious,” says Farid. For example, he wants cloud service providers and app stores such as those operated by Amazon, Microsoft, Google, and Apple, which are all part of the PAI, to ban services that allow people to use deepfake technology with the intent to create nonconsensual sexual imagery. Watermarks on all AI-generated content should also be mandated, not voluntary, he says. 

Another important thing missing is how the AI systems themselves could be made more responsible, says Ilke Demir, a senior research scientist at Intel who leads the company’s work on the responsible development of generative AI. This could include more details on how the AI model was trained, what data went into it, and whether generative AI models have any biases. 

The guidelines make no mention of ensuring that there is no toxic content in the data sets of generative AI models. “It’s one of the most significant ways harm is caused by these systems,” says Daniel Leufer, a senior policy analyst at the digital rights group Access Now.

The guidelines include a list of harms that these companies want to prevent, such as fraud, harassment, and disinformation. But a generative AI model whose output depicts only white people is also doing harm, and that is not currently on the list, adds Demir.

Farid raises a more fundamental issue. Since the companies acknowledge that the technology could lead to some serious harms and offer ways to mitigate them, “why aren’t they asking the question ‘Should we do this in the first place?’”

