
What’s next for AI

Get a head start with our four big bets for 2023.

""
Stephanie Arnett/MITTR; Unsplash, Pexels, Wellcome Collection

In 2022, AI got creative. AI models can now produce remarkably convincing text, images, and even video from just a little prompting.

It’s only been nine months since OpenAI set off the generative AI explosion with the launch of DALL-E 2, a deep-learning model that can produce images from text instructions. That was followed by a breakthrough from Google and Meta: AIs that can produce videos from text. And it’s only been a few weeks since OpenAI released ChatGPT, the latest large language model to set the internet ablaze with its surprising eloquence and coherence. 

The pace of innovation this year has been remarkable—and at times overwhelming. Who could have seen it coming? And how can we predict what’s next?

Luckily, here at MIT Technology Review we’re blessed with not just one but two journalists who spend all day, every day obsessively following all the latest developments in AI, so we’re going to give it a go. 

Here, Will Douglas Heaven and Melissa Heikkilä tell us the four biggest trends they expect to shape the AI landscape in 2023.

Over to you, Will and Melissa.

Get ready for multipurpose chatbots

GPT-4 may be able to handle more than just language

The last several years have seen a steady drip of bigger and better language models. The current high-water mark is ChatGPT, released by OpenAI at the start of December. This chatbot is a slicker, tuned-up version of the company’s GPT-3, the AI that started this wave of uncanny language mimics back in 2020.

But three years is a long time in AI, and though ChatGPT took the world by storm—and inspired breathless social media posts and newspaper headlines thanks to its fluid, if mindless, conversational skills—all eyes now are on the next big thing: GPT-4. Smart money says that 2023 will be the year the next generation of large language models kicks off.

What should we expect? For a start, future language models may be more than just language models. OpenAI is interested in combining different modalities—such as image or video recognition—with text. We’ve seen this with DALL-E. But take the conversational skills of ChatGPT, mix them with image manipulation in a single model, and you’d get something far more general-purpose and powerful. Imagine being able to ask a chatbot what’s in an image, or to ask it to generate an image, with those interactions part of an ongoing conversation so that you can refine the results more naturally than is possible with DALL-E.

We saw a glimpse of this with DeepMind’s Flamingo, a “visual language model” revealed in April, which can answer queries about images using natural language. And then, in May, DeepMind announced Gato, a “generalist” model that was trained using the same techniques behind large language models to perform different types of tasks, from describing images to playing video games to controlling a robot arm.

If GPT-4 builds on such tech, expect the power of the best language and image-making AI (and more) in one package. Combining skills in language and images could in theory make next-gen AI better at understanding both. And it won’t just be OpenAI. Expect other big labs, especially DeepMind, to push ahead with multimodal models next year.

But of course, there’s a downside. Next-generation language models will inherit most of this generation’s problems, such as an inability to tell fact from fiction and a penchant for prejudice. Better language models will also make it harder than ever to tell whether a piece of text, an image, or a video can be trusted. And because nobody has fully figured out how to train models on data scraped from the internet without absorbing the worst of what the internet contains, they will still be filled with filth.

—Will Douglas Heaven

AI’s first red lines

Around the world, new laws and hawkish regulators are poised to put companies on the hook

Until now, the AI industry has been a Wild West, with few rules governing the use and development of the technology. In 2023 that is going to change. Regulators and lawmakers spent 2022 sharpening their claws. Next year, they are going to pounce. 

We are going to see what the final version of the EU’s sweeping AI law, the AI Act, will look like as lawmakers finish amending the bill, potentially by the summer. It will almost certainly include bans on AI practices deemed detrimental to human rights, such as systems that score and rank people for trustworthiness. 

The use of facial recognition in public places by law enforcement will also be restricted in Europe, and there is even momentum to ban the technology outright for both law enforcement and private companies, although a total ban will face stiff resistance from countries that want to use it to fight crime. The EU is also working on a new law to hold AI companies accountable when their products cause harm, such as privacy infringements or unfair decisions made by algorithms.

In the US, the Federal Trade Commission is also closely watching how companies collect data and use AI algorithms. Earlier this year, the FTC forced weight loss company Weight Watchers to destroy data and algorithms because it had collected data on children illegally. In late December, Epic, which makes games like Fortnite, dodged the same fate by agreeing to a $520 million settlement. The regulator has spent this year gathering feedback on potential rules around how companies handle data and build algorithms, and chair Lina Khan has said the agency intends to protect Americans from unlawful commercial surveillance and data security practices with “urgency and rigor.”

In China, authorities have recently banned creating deepfakes without the consent of the subject. Through the AI Act, the Europeans want to add warning signs to indicate that people are interacting with deepfakes or AI-generated images, audio, or video. 

All these regulations could shape how technology companies build, use, and sell AI. However, regulators have to strike a tricky balance between protecting consumers and not hindering innovation, a point tech lobbyists are quick to remind them of.

AI is a field developing at lightning speed, and the challenge will be to keep the rules precise enough to be effective but not so specific that they quickly become outdated. As with the EU’s efforts to regulate data protection, if the new laws are implemented correctly, the next year could usher in a long-overdue era of AI development with more respect for privacy and fairness.

—Melissa Heikkilä

Big tech could lose its grip on fundamental AI research

AI startups flex their muscles 

Big Tech companies are not the only players at the cutting edge of AI; an open-source revolution has begun to match, and sometimes surpass, what the richest labs are doing. 

In 2022 we saw the first community-built, multilingual large language model, BLOOM, released by Hugging Face. We also saw an explosion of innovation around Stable Diffusion, an open-source text-to-image model that rivaled OpenAI’s DALL-E 2.

The big companies that have historically dominated AI research are implementing massive layoffs and hiring freezes as the global economic outlook darkens. AI research is expensive, and as purse strings are tightened, companies will have to be very careful about picking which projects they invest in—and are likely to choose whichever have the potential to make them the most money, rather than the most innovative, interesting, or experimental ones, says Oren Etzioni, the CEO of the Allen Institute for AI, a research organization.

That bottom-line focus is already taking effect at Meta, which has reorganized its AI research teams and moved many researchers into teams that build products.

But while Big Tech is tightening its belt, flashy new upstarts working on generative AI are seeing a surge in interest from venture capital funds.

Next year could be a boon for AI startups, Etzioni says. There is a lot of talent floating around, and in recessions people tend to rethink their lives—going back into academia or leaving a big corporation for a startup, for example.

Startups and academia could become the centers of gravity for fundamental research, says Mark Surman, the executive director of the Mozilla Foundation. 

“We’re entering an era where [the AI research agenda] will be less defined by big companies,” he says. “That’s an opportunity.” 

—Melissa Heikkilä

Big Pharma is never going to be the same again

From AI-produced protein banks to AI-designed drugs, biotech enters a new era

In the last few years, the potential for AI to shake up the pharmaceutical industry has become clear. DeepMind's AlphaFold, an AI that can predict the structures of proteins (the key to their functions), has cleared a path for new kinds of research in molecular biology, helping researchers understand how diseases work and how to create new drugs to treat them. In November, Meta revealed ESMFold, a much faster model for predicting protein structure—a kind of autocomplete for proteins, which uses a technique based on large language models.

Between them, DeepMind and Meta have produced structures for hundreds of millions of proteins, including nearly all of those known to science, and shared them in vast public databases. Biologists and drug makers are already benefiting from these resources, which make looking up a protein structure almost as easy as searching the web. But 2023 could be the year that this groundwork really bears fruit. DeepMind has spun off its biotech work into a separate company, Isomorphic Labs, which has been tight-lipped for more than a year now. There’s a good chance it will come out with something big this year.
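To get a sense of just how accessible these resources are, here is a minimal sketch of a lookup against the AlphaFold Protein Structure Database’s public REST API. It assumes the endpoint and JSON field names documented at alphafold.ebi.ac.uk at the time of writing, and the UniProt accession used is just an illustrative example.

```python
import requests

# Illustrative example: fetch the predicted structure for human hemoglobin
# subunit alpha (UniProt accession P69905) from the AlphaFold database.
# The endpoint and field names assume the public API as documented at
# https://alphafold.ebi.ac.uk; check the docs before relying on them.
ACCESSION = "P69905"

resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}")
resp.raise_for_status()
entry = resp.json()[0]  # the API returns a list of prediction records

# Download the predicted structure as a PDB file for use in standard tools.
pdb = requests.get(entry["pdbUrl"])
pdb.raise_for_status()
with open(f"{ACCESSION}.pdb", "wb") as f:
    f.write(pdb.content)

print(f"Saved AlphaFold prediction for {ACCESSION} to {ACCESSION}.pdb")
```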

Further along the drug development pipeline, there are now hundreds of startups exploring ways to use AI to speed up drug discovery and even design previously unknown kinds of drugs. There are currently 19 drugs developed by AI drug companies in clinical trials (up from zero in 2020), with more to be submitted in the coming months. Initial results from some of these trials could come out next year, potentially allowing the first drug developed with the help of AI to hit the market.

But clinical trials can take years, so don’t hold your breath. Even so, the age of pharmatech is here and there’s no going back. “If done right, I think that we will see some unbelievable and quite amazing things happening in this space,” says Lovisa Afzelius at Flagship Pioneering, a venture capital firm that invests in biotech. 

—Will Douglas Heaven

This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future.
