Alzheimer’s disease is one of the more insidious chronic neurological conditions in modern society. In 2015, some 30 million people were thought to have it. Because the disease is hugely expensive to manage, it places significant burdens on health-care systems all over the world.
Although there is no known way to halt the disease in advanced cases, there is evidence that its progression can be slowed or halted if it is identified early. So finding a reliable way to spot people who are at risk of developing the disease is an important goal.
Today, Hongyoon Choi at the Cheonan Public Health Center and Kyong Hwan Jin at the Korea Advanced Institute of Science and Technology, both in South Korea, say they have used deep learning to develop just such a technique. These guys say their process can accurately identify people likely to develop Alzheimer’s in the next three years.
Cognitive decline is inevitable as we age. We tend to become more forgetful, lose our train of thought more often, and find it harder to make decisions or accomplish tasks. Doctors call this mild cognitive impairment, and it affects most people as they get older.
Many people with mild cognitive impairment go on to develop Alzheimer’s, which is much more severe. People with this condition lose their vocabulary, frequently using incorrect word substitutions. They stop recognizing close relatives, lose basic self-care skills and eventually become entirely dependent on caregivers. Most die within a few years of diagnosis.
But curiously, not all people with mild cognitive impairment follow this path. Some never deteriorate and a few even improve. So doctors would dearly love to be able to spot those likely to develop Alzheimer’s because they are most likely to benefit from treatment.
One way to do this is by studying positron emission tomography (PET) scans of the brain. Alzheimer’s is known to be characterized by the unwanted growth of protein clumps called amyloid plaques and by a slow brain metabolism, as measured by the rate at which the brain uses glucose.
Certain types of PET scans can reveal signs of both these conditions and can therefore be used to spot people with mild cognitive impairment who are most at risk of developing Alzheimer’s.
That’s the theory. In practice, interpreting the images is hard. Researchers have found one or two strong markers that trained observers can look for, but this method is time-consuming and prone to error.
Enter Hongyoon and Kyong, who have replaced the human observers in this process with a deep-learning neural network.
Their method is straightforward. In recent years, Alzheimer’s researchers around the world have been constructing a database of brain images of people with and without Alzheimer’s. Hongyoon and Kyong use this database to train a convolutional neural network to recognize the difference between them.
This data set consists of brain images of 182 people in their 70s with normal brains and brain images of 139 people of roughly the same age who have been diagnosed with Alzheimer’s. With conventional training, the machine soon learns to recognize the difference with an accuracy of almost 90 percent.
Hongyoon and Kyong then use their machine to analyze a different data set. This consists of brain images of 181 people in their 70s with mild cognitive impairment of whom 79 went on to develop Alzheimer’s within three years. The task that Hongyoon and Kyong set the machine was to spot these susceptible individuals.
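The workflow described above — train a classifier on scans labeled normal versus Alzheimer’s, then apply the frozen model to a separate mild-cognitive-impairment cohort — can be sketched in a few lines. The paper uses a convolutional neural network on PET volumes; as a hedged stand-in, the sketch below uses a simple logistic-regression classifier on synthetic feature vectors, purely to illustrate the train-then-transfer pipeline. All data, dimensions, and hyperparameters here are invented for illustration.

```python
# Hypothetical sketch of the train-then-transfer workflow. A tiny
# logistic-regression model on synthetic feature vectors stands in for
# the paper's convolutional network on PET volumes.
import numpy as np

rng = np.random.default_rng(0)

def make_scans(n, shift, dim=64):
    """Synthetic stand-ins for PET-derived feature vectors."""
    return rng.normal(loc=shift, scale=1.0, size=(n, dim))

# Step 1: labeled training set -- 182 normal (label 0) and 139
# Alzheimer's (label 1), mirroring the cohort sizes in the article.
x_train = np.vstack([make_scans(182, 0.0), make_scans(139, 0.5)])
y_train = np.array([0] * 182 + [1] * 139)

# Step 2: fit the classifier by gradient descent on the log loss.
w = np.zeros(x_train.shape[1])
b = 0.0
for _ in range(500):
    z = np.clip(x_train @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))          # predicted probabilities
    grad_w = x_train.T @ (p - y_train) / len(y_train)
    grad_b = np.mean(p - y_train)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# Step 3: apply the frozen classifier to a held-out MCI cohort
# (181 people, of whom 79 "converted") and score its accuracy.
x_mci = np.vstack([make_scans(102, 0.0), make_scans(79, 0.5)])
y_mci = np.array([0] * 102 + [1] * 79)
z_mci = np.clip(x_mci @ w + b, -30, 30)
pred = (1.0 / (1.0 + np.exp(-z_mci)) > 0.5).astype(int)
accuracy = np.mean(pred == y_mci)
print(f"held-out MCI accuracy: {accuracy:.2f}")
```

The key design point, which the sketch preserves, is that the MCI scans never touch the training step: the model only ever sees the clear-cut normal and Alzheimer’s cases, and prediction on the ambiguous MCI group is a genuine out-of-sample test.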
The results make for interesting reading. Hongyoon and Kyong say their neural network identified those at risk of developing Alzheimer’s with an accuracy of 81 percent. That is significantly higher than trained observers manage when visually analyzing the images. “These results show the feasibility of deep learning as a tool for predicting disease outcome using brain images,” they say.
That’s an interesting result. It suggests a relatively quick way of spotting people at risk of developing Alzheimer’s and those who would benefit most from early intervention. That’s an approach that could improve the quality of life for many people and save overburdened health-care systems significant amounts of money.
More generally, Hongyoon and Kyong’s technique is just one example of the growing use of deep learning in medical diagnosis. The evidence suggests that deep-learning machines can spot complex conditions earlier and more accurately than humans. And the technique works for diverse conditions from heart disease to cancer.
Clearly, deep learning is set to change the world of medicine. The only question for those currently suffering with mild cognitive impairment is how quickly.
Ref: arxiv.org/abs/1704.06033: Predicting Cognitive Decline with Deep Learning of Brain Metabolism and Amyloid Imaging