
AI’s Research Rut

When we think of AI as one particular thing, we drag the whole field down.

When you picture AI, what do you see? A humanoid robot? When you think about a real-world application of AI, what comes to mind? Probably autonomous driving. When you think about the technical details of AI, what approach do you name? I’m willing to bet it’s deep learning.

In reality, AI comes in many shapes and forms. AI machines go far beyond humanoid robots; they range from software that detects bullying on social media, to wearable devices that monitor personal health risk factors, to robotic arms learning to feed paralyzed people, to autonomous robots exploring other planets. The potential applications of AI are limitless: personalized education, elderly assistance, wildlife behavior analysis, medical-record mining, and much more.

Our failure to appreciate this spectrum threatens to hold back the field. When we collectively picture AI as one type of thing—whether it’s humanoid robots or self-driving cars or deep learning—we encourage the next generation of researchers to be excited exclusively about those narrow things. If students are presented with a homogeneous pool of AI research role models, it becomes a self-fulfilling prophecy that only students who “fit in” will remain in the field.

Since AI has enticingly broad possible applications, we need people with a comparably broad set of experiences and worldviews working on AI problems. Wouldn’t research teams working on AI medical applications benefit from researchers trained in biology? Wouldn’t teams working on AI hunger relief benefit from researchers with firsthand experience in poor countries? Wouldn’t teams working on AI assistive devices benefit from researchers with physical disabilities?

Today there’s a lot of fascinating work going on in AI (see “AI’s Language Problem”), but we’re also kind of in a rut. We’ve tended to breed the same style of researchers over and over again—people who come from similar backgrounds, have similar interests, read the same books as kids, learn from the same thought leaders, and ultimately do the same kinds of research. Given that AI is such an all-encompassing field, and a giant part of our future, we can’t afford to do that anymore.

Olga Russakovsky is a postdoctoral research fellow at the Robotics Institute of Carnegie Mellon University.
