IBM says that Watson, its artificial-intelligence technology, can use advanced computer vision to process huge volumes of medical images. Now Watson has its sights set on using this ability to help doctors diagnose diseases faster and more accurately.
Last week IBM announced it would buy Merge Healthcare for $1 billion. If the deal is finalized, Merge will be the third health-care data company IBM has bought this year (see “Meet the Health-Care Company IBM Needed to Make Watson More Insightful”). Merge specializes in handling all kinds of medical images, and its service is used by more than 7,500 hospitals and clinics in the United States, as well as clinical research organizations and pharmaceutical companies. Shahram Ebadollahi, vice president of innovation and chief science officer for IBM’s Watson Health Group, says the acquisition is part of an effort to draw on many different data sources, including anonymized, text-based medical records, to help physicians make treatment decisions.
Merge’s data set contains some 30 billion images, a trove that is crucial to IBM because its plans for Watson rely on deep learning, a technology that trains a computer by feeding it large amounts of data.
Watson won Jeopardy! by using advanced natural-language processing and statistical analysis to interpret questions and provide the correct answers. Deep learning was added to Watson’s skill set more recently (see “IBM Pushes Deep Learning with a Watson Upgrade”). This new approach to artificial intelligence involves teaching computers to spot patterns in data by processing it in ways inspired by networks of neurons in the brain (see “Breakthrough Technologies 2013: Deep Learning”). The technology has already produced very impressive results in speech recognition (see “Microsoft Brings Star Trek’s Voice Translator to Life”) and image recognition (see “Facebook Creates Software That Matches Faces Almost as Well as You Do”).
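The core idea behind deep learning, learning a pattern purely from labeled examples rather than hand-written rules, can be sketched in a few lines. The toy network below (NumPy only; the task, layer sizes, and learning rate are illustrative choices, not anything from IBM's systems) learns the XOR pattern from its four input/output examples:

```python
# A minimal sketch of learning from examples: a tiny neural network
# with one hidden layer is trained on labeled data (here, XOR) and
# discovers the pattern itself. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

# Randomly initialized weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.2
for _ in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: push the prediction error back through each layer
    # (cross-entropy loss, so the output-layer error is simply p - y).
    dp = p - y
    dhidden = (dp @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ dp
    b2 -= lr * dp.sum(axis=0)
    W1 -= lr * X.T @ dhidden
    b1 -= lr * dhidden.sum(axis=0)

predictions = (p > 0.5).astype(int).ravel()
print(predictions)  # the network's learned answers for the four inputs
```

The same mechanism scales up: swap the four XOR rows for millions of images and the 8-unit hidden layer for many stacked layers, and you have the kind of system the article describes.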
IBM’s researchers think medical image processing could be next. Images are estimated to make up as much as 90 percent of all medical data today, but it can be difficult for physicians to glean important information from them, says John Smith, senior manager for intelligent information systems at IBM Research.
One of the most promising near-term applications of automated image processing, says Smith, is in detecting melanoma, a type of skin cancer. Diagnosing melanoma can be difficult, in part because there is so much variation in the way it appears in individual patients. By feeding a computer many images of melanoma, it is possible to teach the system to recognize very subtle but important features associated with the disease. The technology IBM envisions might be able to compare a new image from a patient with many others in a database and then rapidly give the doctor important information, gleaned from the images as well as from text-based records, about the diagnosis and potential treatments.
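One way to picture the comparison step described above is as a nearest-neighbor search: each image is reduced to a feature vector, and a new patient's image is matched against the stored cases most similar to it. The sketch below uses random vectors as stand-ins for the features a trained network would extract; the database, labels, and function names are hypothetical, not IBM's actual system.

```python
# A toy sketch of comparing a new image against a database of prior
# cases. Feature vectors here are random placeholders for what a deep
# network would extract from real images; everything is illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database: 1,000 prior cases, each a 128-d feature vector
# paired with a diagnosis label drawn from its text-based record.
db_features = rng.normal(size=(1000, 128))
db_labels = rng.choice(["benign", "melanoma"], size=1000)

def top_matches(query, features, labels, k=5):
    """Return the k stored cases most similar to `query`, by cosine
    similarity, as (label, similarity) pairs in descending order."""
    q = query / np.linalg.norm(query)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ q
    idx = np.argsort(sims)[::-1][:k]
    return list(zip(labels[idx], sims[idx]))

new_case = rng.normal(size=128)  # feature vector for a new image
for label, sim in top_matches(new_case, db_features, db_labels):
    print(f"{label}: similarity {sim:.3f}")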
Finding cancer in lung CT scans is another good example of how such technology could help diagnosis, says Jeremy Howard, CEO of Enlitic, a one-year-old startup that is also using deep learning for medical image processing (see “A Startup Hopes to Teach Computers to Spot Tumors in Medical Scans”). “You have to scroll through hundreds and hundreds of slices looking for a few little glowing pixels that appear and disappear, and that takes a long time, and it is very easy to make a mistake,” he says. Howard says his company has already created an algorithm capable of identifying relevant characteristics of lung tumors more accurately than radiologists can.
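The slice-by-slice search Howard describes can be at least crudely automated. The sketch below scans a synthetic 3-D volume for small clusters of unusually bright voxels; real systems like Enlitic's use learned features rather than a fixed threshold, so this rule, the data, and the function name are purely illustrative.

```python
# A simplified sketch of scanning a CT-like volume for "a few little
# glowing pixels": flag any slice containing a small cluster of voxels
# far above the background. The thresholding rule is illustrative only.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "volume": 200 slices of 64x64 noise, with one small bright
# spot planted on slice 120 to stand in for a suspicious finding.
volume = rng.normal(0.0, 1.0, size=(200, 64, 64))
volume[120, 30:33, 40:43] += 8.0

def flag_slices(vol, z=5.0, min_voxels=4):
    """Return indices of slices with at least `min_voxels` voxels more
    than `z` standard deviations above the volume-wide mean."""
    thresh = vol.mean() + z * vol.std()
    hits = (vol > thresh).reshape(vol.shape[0], -1).sum(axis=1)
    return np.flatnonzero(hits >= min_voxels)

print(flag_slices(volume))  # slices a radiologist should review first
```

Even this crude filter turns a scroll through hundreds of slices into a short list of candidates, which is the time savings Howard is pointing at.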
Howard says the biggest barrier to using deep learning in medical diagnostics is that so much of the data necessary for training the systems remains isolated in individual institutions, and government regulations can make it difficult to share that information. IBM’s acquisition of Merge, with its billions of medical images, could help address that problem.