Researchers are struggling to replicate AI studies
Missing code and data are making it difficult to compare machine-learning work—and that may be hurting progress.
The problem: Science reports that in a sample of 400 papers presented at top AI conferences in recent years, only 6 percent of presenters shared the code behind their algorithms. Just a third shared their data, and a little over half shared summaries of their algorithms, known as pseudocode.
Why it matters: Without that information, it’s hard to reproduce a study’s findings. That makes it all but impossible to benchmark newly developed tools against existing ones, leaving researchers with little sense of which directions future work should take.
How to solve it: Sometimes a lack of sharing is understandable, say, when the intellectual property belongs to a private firm. But there also seems to be a more widespread culture of keeping details under wraps. Some conferences and journals now encourage sharing; perhaps more ought to follow.