Artists will be able to opt out of having their work used to train the next version of one of the world’s most popular text-to-image AI generators, Stable Diffusion, the company behind it has announced.
Stability.AI will work with Spawning, an organization founded by the artist couple Mat Dryhurst and Holly Herndon, which has built a website called HaveIBeenTrained that lets artists search for their works in the data set used to train Stable Diffusion. Artists will be able to select which works they want excluded from the training data.
The decision follows a heated public debate between artists and tech companies over how text-to-image AI models should be trained. Stable Diffusion is based on the open-source LAION-5B data set, which is built by scraping images from the internet, including copyrighted works of artists. Some artists’ names and styles have become popular prompts for wannabe AI artists.
Dryhurst told MIT Technology Review that artists have “around a couple of weeks” to opt out before Stability.AI starts training its next model, Stable Diffusion 3.
The hope, Dryhurst says, is that until there are clear industry standards or regulations around AI art and intellectual property, Spawning’s opt-out service will complement legislation where it exists and fill the gap where it doesn’t. In the future, he says, artists will also be able to opt in to having their works included in data sets.
A spokesperson for Stability.AI told MIT Technology Review: “We are listening to artists and the community and working with collaborators to improve the dataset. This involves allowing people to opt out of the model and also to opt in when they are not already included.”
But Karla Ortiz, an artist and a board member of the Concept Art Association, an advocacy organization for artists working in entertainment, says she doesn’t think Stability.AI is going far enough.
The fact that artists have to opt out means “that every single artist in the world is automatically opted in and our choice is taken away,” she says.
“The only thing that Stability.AI can do is algorithmic disgorgement, where they completely destroy their database and they completely destroy all models that have all of our data in it,” she says.
The Concept Art Association is raising $270,000 to hire a full-time lobbyist in Washington, DC, in hopes of bringing about changes to US copyright, data privacy, and labor laws to ensure that artists’ intellectual property and jobs are protected. The group wants to update laws on intellectual property and data privacy to address new AI technologies, require AI companies to adhere to a strict code of ethics, and work with labor unions and industry groups that deal with creative work.
“It just truly does feel like we artists are the canary in the coal mine right now,” says Ortiz.
Ortiz says the group is sounding the alarm to all creative industries that AI tools are coming for creative professions “really fast,” and “the way that it’s being done is extremely exploitative.”