Nvidia’s Deep-Learning Chips May Give Medicine a Shot in the Arm
The chip maker Nvidia is riding the current artificial-intelligence boom with hardware designed to power cutting-edge learning algorithms. And the company sees health care and medicine as the next big market for its technology.
Kimberly Powell, who leads Nvidia’s efforts in health care, says the company is working with medical researchers in a range of areas and will look to expand these efforts in coming years.
“There’s this amazing surge in medical imaging research,” Powell said at MIT Technology Review’s EmTech Digital conference in San Francisco on Monday. “More and more we’re visiting providers at hospitals today, and they’re imagining new artificial-intelligence applications.”
Most notably, a machine-learning technique called deep learning is being applied to processing medical images and sifting through large amounts of medical data. Deep learning, which is very loosely inspired by the way neurons in the brain seem to work, has already proved incredibly useful for recognizing images and processing audio (see “10 Breakthrough Technologies: Deep Learning”).
This AI technique certainly seems to be gaining acolytes in medical research. Last year a team from Google showed that deep learning can be used to automate the diagnosis of eye disease. Meanwhile, a group from Stanford University published a paper in the journal Nature that showed the technique can spot skin cancer as well as a trained dermatologist. A group from Mount Sinai Hospital in New York used the approach to analyze patients’ electronic health records and predict, with surprisingly high accuracy, what disease a person would go on to develop.
These are just a few high-profile examples. Powell noted during her talk that large medical-imaging conferences have become dominated by deep-learning papers.
The graphics processors made by Nvidia are very well suited to performing the parallel calculations required for deep learning, and the chip maker has already built a sizable business supplying hardware to deep-learning researchers in academia and industry. Nvidia makes a growing number of specialized deep-learning products, including a powerful research computer called the DGX-1 and a system for self-driving vehicles called the Drive PX.
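To see why deep learning maps so naturally onto graphics hardware, consider that a single network layer boils down to one large matrix multiplication, in which every output value can be computed independently. The sketch below (an illustrative NumPy example, not Nvidia code; it runs on a CPU, so it shows the shape of the computation rather than GPU speed) makes that structure concrete:

```python
import numpy as np

# One dense deep-learning layer: a matrix multiplication plus a
# nonlinearity. Each output element depends only on one input row
# and one weight column, so all of them can be computed in
# parallel -- the workload GPUs spread across thousands of cores.
rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 784))     # 32 flattened input images
weights = rng.standard_normal((784, 128))  # layer parameters
bias = np.zeros(128)

activations = np.maximum(0, batch @ weights + bias)  # ReLU layer
print(activations.shape)  # (32, 128)
```

A full network simply stacks many such layers, which is why training it amounts to running enormous numbers of these independent multiply-adds, exactly the pattern graphics processors were built for.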
Powell believes the company’s hardware will increasingly be found in hospitals and medical research centers, too. The approach could help improve the reliability of diagnosis, she said, and might significantly boost standards of care in developing countries, where expertise is scarce. Powell added that drug discovery would likely be another big area for deep learning in the future.
But deep learning might also help doctors find patterns that would otherwise be invisible. Nvidia is, for example, working with Bradley Erickson, a neuro-radiologist at the Mayo Clinic, to apply deep learning to brain images. Erickson has had some success in identifying genetic factors related to brain disease from images, Powell said.
Earlier, at the same event, Gary Marcus, a professor from NYU, singled out medicine as the area in which AI could have its biggest impact. “Think about cancer,” Marcus said. The risk factors that might indicate the likelihood of such a disease may be hard for a person to identify, but they could be uncovered by an algorithm, he said. “The killer app [for AI] might be major advances in how we treat medicine.”
There are, however, significant challenges in applying techniques like deep learning to medicine. The approach is so complex and opaque that it may not be clear to a doctor why an algorithm comes up with a particular diagnosis. Powell acknowledged this challenge but said that solutions, such as new ways of visualizing the behavior of deep-learning networks, were emerging. “It’s a big topic in research right now,” she said.