Artificial intelligence

AI that makes images: 10 Breakthrough Technologies 2023

AI models that generate stunning imagery from simple phrases are evolving into powerful creative and commercial tools.

Erik Carter via DALL-E 2

WHO

OpenAI, Stability AI, Midjourney, Google

WHEN

Now

OpenAI introduced a world of weird and wonderful mash-ups when its text-to-image model DALL-E was released in 2021. Type in a short description of pretty much anything, and the program would spit out a picture of what you asked for in seconds. DALL-E 2, unveiled in April 2022, was a massive leap forward. Google also launched its own image-making AI, called Imagen.

Yet the biggest game-changer was Stable Diffusion, an open-source text-to-image model released for free by UK-based startup Stability AI in August 2022. Not only could Stable Diffusion produce some of the most stunning images yet, but it was designed to run on a (good) home computer.

By making text-to-image models accessible to all, Stability AI poured fuel on what was already an inferno of creativity and innovation. Millions of people have created tens of millions of images in just a few months. But there are problems, too. Artists are caught in the middle of one of the biggest upheavals in a decade. And, just like language models, text-to-image generators can amplify the biased and toxic associations buried in training data scraped from the internet.

The tech is now being built into commercial software, such as Photoshop. Visual-effects artists and video-game studios are exploring how it can fast-track development pipelines. And text-to-image technology has already advanced to text-to-video. The AI-generated video clips demoed by Google, Meta, and others in the last few months are only seconds long, but that will change. One day movies could be made just by feeding a script into a computer.

Nothing else in AI grabbed people’s attention more last year—for the best and worst reasons. Now we wait to see what lasting impact these tools will have on creative industries—and the entire field of AI.

No one knows where the rise of generative AI will leave us.

