Artificial intelligence

Artists can now opt out of the next version of Stable Diffusion

The move follows a heated public debate between artists and tech companies over how text-to-image AI models are trained.

December 16, 2022

Artists will have the chance to opt out of the next version of one of the world’s most popular text-to-image AI generators, Stable Diffusion, the company behind it has announced.

Stability.AI will work with Spawning, an organization founded by the artist couple Mat Dryhurst and Holly Herndon, who have built a website called HaveIBeenTrained that lets artists search for their works in the data set used to train Stable Diffusion. Artists will be able to select which works they want excluded from the training data.

The decision follows a heated public debate between artists and tech companies over how text-to-image AI models should be trained. Stable Diffusion was trained on the open-source LAION-5B data set, which was built by scraping images from the internet, including copyrighted works by artists. Some artists’ names and styles have become popular prompts for wannabe AI artists.

Dryhurst told MIT Technology Review that artists have “around a couple of weeks” to opt out before Stability.AI starts training its next model, Stable Diffusion 3. 

The hope, Dryhurst says, is that until there are clear industry standards or regulations around AI art and intellectual property, Spawning’s opt-out service will either augment legislation or compensate for its absence. In the future, he says, artists will also be able to opt in to having their works included in data sets.

A spokesperson for Stability.AI told MIT Technology Review: “We are listening to artists and the community and working with collaborators to improve the dataset. This involves allowing people to opt out of the model and also to opt in when they are not already included.”

But Karla Ortiz, an artist and a board member of the Concept Art Association, an advocacy organization for artists working in entertainment, says she doesn’t think Stability.AI is going far enough.

The fact that artists have to opt out means “that every single artist in the world is automatically opted in and our choice is taken away,” she says.

“The only thing that Stability.AI can do is algorithmic disgorgement, where they completely destroy their database and they completely destroy all models that have all of our data in it,” she says. 

The Concept Art Association is raising $270,000 to hire a full-time lobbyist in Washington, DC, in hopes of bringing about changes to US copyright, data privacy, and labor laws to ensure that artists’ intellectual property and jobs are protected. The group wants to update laws on intellectual property and data privacy to address new AI technologies, require AI companies to adhere to a strict code of ethics, and work with labor unions and industry groups that deal with creative work. 

“It just truly does feel like we artists are the canary in the coal mine right now,” says Ortiz. 

Ortiz says the group is sounding the alarm to all creative industries that AI tools are coming for creative professions “really fast,” and “the way that it’s being done is extremely exploitative.” 

