A giant, superfast AI chip is being used to find better cancer drugs

A new generation of specialized hardware could make drug development and materials discovery orders of magnitude faster.
November 20, 2019
The Cerebras CS-1 computer, optimized for artificial intelligence applications. (Argonne National Laboratory)

At Argonne National Laboratory, roughly 30 miles from downtown Chicago, scientists try to understand the origin and evolution of the universe, create longer-lasting batteries, and develop precision cancer drugs.

All these problems have one thing in common: they are tough because of their sheer scale. In drug discovery, the number of potential drug-like molecules is estimated to exceed the number of atoms in the solar system. Searching such a vast space of possibilities within human time scales requires powerful, fast computation, and until recently that was unavailable, making the task essentially intractable.

But in the last few years, AI has changed the game. Deep-learning algorithms excel at quickly finding patterns in reams of data, which has sped up key processes in scientific discovery. Now, along with these software improvements, a hardware revolution is also on the horizon.

Yesterday Argonne announced that it has begun to test a new computer from the startup Cerebras that promises to accelerate the training of deep-learning algorithms by orders of magnitude. The computer, which houses the world’s largest chip, is part of a new generation of specialized AI hardware that is only now being put to use.

“We’re interested in accelerating the AI applications that we have for scientific problems,” says Rick Stevens, Argonne’s associate lab director for computing, environment, and life sciences. “We have huge amounts of data and big models, and we’re interested in pushing their performance.”

An aerial view of the Argonne National Laboratory campus, on the outskirts of Chicago. (Argonne National Laboratory)

Currently, the most common chips used in deep learning are graphics processing units, or GPUs. GPUs are great parallel processors: before the AI world adopted them, they were widely used for gaming and graphics production. By coincidence, the same characteristics that let them render pixels quickly also make them the preferred choice for deep learning.

But fundamentally, GPUs are general purpose; while they have successfully powered this decade’s AI revolution, their designs are not optimized for the task. These inefficiencies cap the speed at which the chips can run deep-learning algorithms and cause them to soak up huge amounts of energy in the process.

In response, companies have raced to design new chip architectures specially suited to AI. Done well, such chips could train deep-learning models up to 1,000 times faster than GPUs, with far less energy. Cerebras is one of a long list of companies that have jumped to capitalize on the opportunity. Others include startups like Graphcore, SambaNova, and Groq, and incumbents like Intel and Nvidia.

The Cerebras computer being installed at Argonne. (Argonne National Laboratory)

A successful new AI chip will have to meet several criteria, says Stevens. At a minimum, it has to be 10 to 100 times faster than general-purpose processors when working with the lab’s AI models. Many of the specialized chips are optimized for commercial deep-learning applications, like computer vision and language, but may not perform as well when handling the kinds of data common in scientific research. “We have a lot of higher-dimensional data sets,” Stevens says, meaning sets that weave together massive, disparate data sources and are far more complex to process than a two-dimensional photo.

The chip must also be reliable and easy to use. “We’ve got thousands of people doing deep learning at the lab, and not everybody’s a ninja programmer,” says Stevens. “Can people use the chip without having to spend time learning something new on the coding side?”

Thus far, Cerebras’s computer has checked all the boxes. Thanks to the size of its chip, which is larger than an iPad and packs 1.2 trillion transistors for making calculations, there is no need to hook multiple smaller processors together, a step that can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours. “We want to be able to train these models fast enough so the scientist that’s doing the training still remembers what the question was when they started,” says Stevens.

The Cerebras CS-1 computer up and running. (Argonne National Laboratory)

To start, Argonne has been testing the computer on its cancer drug research. The goal is to develop a deep-learning model that can predict how a tumor might respond to a drug or combination of drugs. The model can then be used in one of two ways: to develop new drug candidates that could have desired effects on a specific tumor, or to predict the effects of a single drug candidate on many different types of tumors.
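
The article doesn’t describe the model’s internals, but a minimal sketch helps make those two usage modes concrete. The PyTorch code below is purely illustrative: the two-branch design, the input features (a tumor’s gene-expression profile and a drug’s molecular fingerprint), and all dimensions are assumptions, not Argonne’s actual model.

```python
# Hypothetical sketch of a drug-response model of the kind described
# above. Feature sizes, architecture, and the single response score
# are illustrative assumptions, not Argonne's actual design.
import torch
import torch.nn as nn

class DrugResponseModel(nn.Module):
    def __init__(self, tumor_dim=942, drug_dim=512, hidden=1024):
        super().__init__()
        # One encoder per input: the tumor's expression profile and
        # the drug's molecular fingerprint.
        self.tumor_encoder = nn.Sequential(nn.Linear(tumor_dim, hidden), nn.ReLU())
        self.drug_encoder = nn.Sequential(nn.Linear(drug_dim, hidden), nn.ReLU())
        # A joint head maps the fused features to one response score.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, tumor, drug):
        fused = torch.cat([self.tumor_encoder(tumor), self.drug_encoder(drug)], dim=-1)
        return self.head(fused)

model = DrugResponseModel()

# Usage mode 1: screen many candidate drugs against one tumor.
one_tumor = torch.randn(1, 942).expand(1000, -1)  # same profile, repeated
candidates = torch.randn(1000, 512)               # 1,000 drug fingerprints
scores = model(one_tumor, candidates)             # shape (1000, 1)

# Usage mode 2: predict one drug's effect across many tumor types.
tumors = torch.randn(200, 942)                    # 200 tumor profiles
one_drug = torch.randn(1, 512).expand(200, -1)    # same drug, repeated
scores = model(tumors, one_drug)                  # shape (200, 1)
```

Both modes are just batched inference with one input held fixed, which is why deploying such a model can mean billions of forward passes once the drug library and tumor panel grow large.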

Stevens expects Cerebras’s system to dramatically speed up both development and deployment of the cancer drug model, which could involve training the model hundreds of thousands of times and then running it billions more times to make predictions on every drug candidate.

He also hopes it will boost the lab’s research on other topics, such as battery materials and traumatic brain injury. The former would involve developing an AI model that predicts the properties of millions of molecular combinations in search of alternatives to lithium-ion chemistry. The latter would involve developing a model to predict the best treatment options; it’s a surprisingly hard task because it requires processing so many types of data (brain images, biomarkers, text) very quickly.
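
The brain-injury work is the clearest example of that multimodal challenge, so here is a hypothetical sketch, in the same spirit as the one above, of how images, tabular biomarkers, and text might be fused. Every encoder, dimension, and the treatment-scoring head are assumptions, not the lab’s design.

```python
# Hypothetical multimodal sketch: encode each data type separately,
# concatenate, and score candidate treatment options. All components
# are illustrative assumptions.
import torch
import torch.nn as nn

class TBITreatmentModel(nn.Module):
    def __init__(self, n_biomarkers=64, vocab_size=30000, n_treatments=5):
        super().__init__()
        # Small CNN over a single-channel brain image slice.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # MLP over tabular biomarker measurements.
        self.biomarker_encoder = nn.Sequential(nn.Linear(n_biomarkers, 32), nn.ReLU())
        # Averaged word embeddings as a stand-in for a clinical-notes encoder.
        self.text_encoder = nn.EmbeddingBag(vocab_size, 32)
        # Fused head produces a score per candidate treatment option.
        self.head = nn.Linear(32 * 3, n_treatments)

    def forward(self, image, biomarkers, note_tokens):
        fused = torch.cat([
            self.image_encoder(image),
            self.biomarker_encoder(biomarkers),
            self.text_encoder(note_tokens),
        ], dim=-1)
        return self.head(fused)   # logits over treatment options

model = TBITreatmentModel()
logits = model(
    torch.randn(4, 1, 128, 128),         # brain image slices
    torch.randn(4, 64),                  # biomarker panels
    torch.randint(0, 30000, (4, 20)),    # tokenized note snippets
)
```

Encoding each modality separately and concatenating before a shared head is the simplest common fusion pattern; the point of the sketch is only that every extra data type adds its own encoder, which is part of what makes such models heavy to train.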

Ultimately, Stevens is excited about what the combination of AI software and hardware advances could bring to scientific exploration. “It’s going to change dramatically how scientific simulation happens,” he says.
