Precision Brain Scans

High-tech imaging takes the guesswork out of diagnosis.

Diagnosing neurological disorders such as multiple sclerosis and Alzheimer's disease has long come down to educated guesswork. Today's doctors rely primarily on interviews, physical examinations, and laboratory tests to detect these complex diseases, but symptoms can vary dramatically from one patient to the next, making diagnosis tricky and subjective. By combining new databases with improved medical-imaging techniques able to resolve telltale anatomical features a millimeter across or smaller, researchers are starting to make the invisible visible, potentially enabling them to offer patients earlier and more accurate diagnoses.

At the State University of New York at Buffalo, for example, researchers have developed software that renders three-dimensional pictures of the brain from magnetic-resonance imaging data, allowing them to digitally parcel off areas of the brain and precisely calculate their volume. Rohit Bakshi, director of the Buffalo Neuroimaging Analysis Center, has used the technology to show that the caudate nucleus (a part of the brain's gray matter involved in motor control and thinking) is significantly smaller in multiple-sclerosis patients than in healthy people (see image). Through such software tools, Bakshi hopes to standardize the way neurologists analyze MRIs. "Today, two clinicians can look at the same MRI and see it differently," says Bakshi. "We're working on making MRI a quantitative and standardized test, like a blood test, where you get a specific, reliable value back, and you can accurately compare the results to normal people."
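The volume measurement Bakshi describes reduces, at its core, to counting the voxels assigned to a brain region in a segmented scan and multiplying by the physical size of each voxel. Here is a minimal sketch in Python with NumPy; the function name, label values, and voxel dimensions are illustrative assumptions, not details of the Buffalo center's software:

```python
import numpy as np

def region_volume_mm3(segmentation: np.ndarray, label: int,
                      voxel_dims_mm: tuple) -> float:
    """Volume of one labeled brain region, in cubic millimeters.

    segmentation:  3-D integer array in which each voxel holds a region
                   label (e.g. from an MRI parcellation).
    label:         integer code of the region of interest (hypothetical).
    voxel_dims_mm: physical size of one voxel along each axis, in mm.
    """
    voxel_count = int(np.count_nonzero(segmentation == label))
    voxel_volume = float(np.prod(voxel_dims_mm))
    return voxel_count * voxel_volume

# Toy example: a 10x10x10 "scan" with a 3x3x3 block labeled 5.
seg = np.zeros((10, 10, 10), dtype=int)
seg[2:5, 2:5, 2:5] = 5
print(region_volume_mm3(seg, 5, (1.0, 1.0, 1.0)))  # 27 voxels of 1 mm^3 each
```

Real MRI voxels are often anisotropic (e.g. 0.5 x 0.5 x 2.0 mm), which is why the physical dimensions, not just the voxel count, matter when comparing volumes across scanners.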

Besides aiding diagnosis, the new techniques could help track the course of a disease, and the benefits of treatments. Bruce Rosen, director of the Martinos Center for Biomedical Imaging at Massachusetts General Hospital in Boston, and his colleagues are already using MRI machines to measure the thickness of brain structures only one-tenth to two-tenths of a millimeter in size. "That means we could see changes in the brain in response to a drug that occur in three to six months instead of assessing memory improvements, which tend to evolve over 12 to 18 months," says Rosen.

To help standardize the diagnostic process, Bakshi's center is developing a large database of brain scans taken at multiple sites across the state of New York. With more than a thousand images already in stock, he and his colleagues are building software that correlates scans of multiple-sclerosis patients with data about the course of each patient's disease, to identify variations and predict whether patients will recover or develop chronic illness. A consortium of U.S. universities, which includes Rosen's imaging center, started work this year on a similar database network containing brain scans of Alzheimer's patients from across the country. The $20 million project will connect databases at hospitals and universities in California, North Carolina, and Massachusetts. "Ultimately, our hope is that when somebody comes in and we take a brain scan, we can make a diagnosis, stratify their disease and determine what treatments would be most effective," says Rosen.
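The article does not describe the methods behind the Buffalo prediction software. Purely as a toy illustration of how scan-derived measurements might be correlated with disease outcomes, the sketch below uses a simple nearest-neighbor vote in Python; every feature, value, and label here is invented for the example:

```python
import numpy as np

# Hypothetical scan-derived features per patient:
# [caudate volume (cm^3), lesion load (cm^3)] -- all values invented.
features = np.array([
    [4.8, 2.1],
    [5.1, 1.8],
    [3.2, 9.4],
    [3.0, 8.7],
])
# Invented outcome labels: 0 = stable course, 1 = chronic progression.
outcomes = np.array([0, 0, 1, 1])

def predict_course(new_scan: np.ndarray, k: int = 3) -> int:
    """Predict an outcome label by majority vote of the k nearest
    patients in feature space (plain Euclidean distance)."""
    dists = np.linalg.norm(features - new_scan, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(round(outcomes[nearest].mean()))

# A new scan resembling the progressive cases.
print(predict_course(np.array([3.1, 9.0])))
```

A real system would work from thousands of scans and far richer features, but the basic idea is the same: place a new patient's measurements in the context of a population whose outcomes are already known.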

“Databases like these are definitely the future,” says Robert Knowlton, a neurologist at the University of Alabama at Birmingham Hospital. Better classifications of brain diseases, he says, will ultimately lead to better treatments.
