UCSF Professor (and MD) Bob Wachter has a unique proposal for future applications of IBM’s Jeopardy question answering supercomputer, Watson: Why not turn it into a front-line tool for generating possible diagnoses based on patient histories?
It’s not such a far-fetched notion: John Kelly, head of IBM’s research labs, told the New York Times that he’d like to see a “medical version” of Watson, one that could be available within a few years.
Wachter takes that notion and runs with it, imagining a system that extends Watson’s knowledge base, which is currently rather static, and adapts it to a specific context.
Let’s say that Watson, M.D. is pressed into service in New Orleans, a city that has suffered from mosquito-borne illnesses for so long that Tulane University houses one of the few departments of Tropical Medicine in the U.S. Rather than simply churning through statistical relationships between various symptoms, as it normally would, the system would also learn from the local patient population - making it more likely, for example, to identify outbreaks of diseases that are statistically unlikely elsewhere.
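The intuition here is essentially Bayesian: the likelihood of symptoms given a disease stays the same, but the prior probability of the disease shifts with local prevalence. A toy sketch of that reweighting, with entirely invented diseases, likelihoods, and base rates (this is not how Watson works, just the statistical idea):

```python
# Toy illustration of prior reweighting: P(disease | symptoms) is
# proportional to P(symptoms | disease) * P(disease). All numbers invented.

# Hypothetical P(fever + joint pain | disease)
likelihood = {"influenza": 0.30, "dengue": 0.70}

# National base rates vs. rates learned from the local patient population
national_prior = {"influenza": 0.050, "dengue": 0.0001}
local_prior = {"influenza": 0.050, "dengue": 0.0300}  # local outbreak

def rank(priors):
    """Rank diseases by unnormalized posterior probability."""
    scores = {d: priors[d] * likelihood[d] for d in priors}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(national_prior))  # ['influenza', 'dengue']
print(rank(local_prior))     # ['dengue', 'influenza']
```

With national base rates, the common disease wins; once the local prior reflects an outbreak, the otherwise rare diagnosis rises to the top of the list - which is exactly the behavior Wachter is imagining.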
According to Wachter, what makes Watson so promising is its ability to handle ambiguity - both the wordplay present in Jeopardy clues and any extraneous information. In the past, attempts to apply AI to medical diagnoses have lacked the sophistication necessary to handle relatively straightforward inputs, much less natural language queries:
…In the 1980s medical informaticians dove headlong into the quest for a “killer app” medical AI program. Going by names like DxPlain and Iliad, virtually all suffered from an inability to “roll with the punches” - to handle unexpected or extraneous data - like an expert. While they could create lists of possible diagnoses that included a few surprising and plausible choices, all of them also spewed out lots of unusable garbage. Moreover, the programs were clunky and expensive, and, because all clinical data were on paper charts, it took redundant work to enter the necessary information into the computer program to generate the output. By the early 1990s, the field of medical AI was moribund, the enthusiasm sapped.
The exponential growth of information represents an enormous challenge for doctors. There are terabytes of data about diseases and symptoms, treatments and outcomes. This data is leading to an explosion of research papers. In 2008, there were 50,000 papers published on neuroscience alone, more than twice as many as in 2006.
It’s impossible for one person, or even a team of people, to keep on top of this flood of findings. Conceivably, a question-answering machine like IBM’s Watson could read those thousands of papers, find trends and correlations, and answer questions about them.
A tool like this, matching a patient’s symptoms against findings in the literature and in medical records, could help doctors come up with diagnoses - and point out the dangers and pitfalls of their own hunches. This machine, a bionic Dr. House, would by no means be infallible. Some of its suggestions would be silly, and it would be up to humans to vet them. But it could be a useful tool.
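At its simplest, "matching symptoms with findings in the literature" is a similarity-ranking problem. A minimal sketch, assuming hypothetical symptom profiles mined from papers (the diseases, symptom sets, and the choice of Jaccard similarity are all illustrative assumptions, not anything the article attributes to Watson):

```python
# Hypothetical symptom associations extracted from the literature (invented).
literature = {
    "lupus": {"fatigue", "joint pain", "rash", "fever"},
    "lyme disease": {"fatigue", "rash", "headache", "fever"},
    "anemia": {"fatigue", "pallor", "shortness of breath"},
}

def rank_diagnoses(patient_symptoms):
    """Rank candidate diagnoses by Jaccard overlap with the patient's symptoms."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return sorted(literature,
                  key=lambda d: jaccard(patient_symptoms, literature[d]),
                  reverse=True)

print(rank_diagnoses({"fatigue", "joint pain", "rash"}))
# ['lupus', 'lyme disease', 'anemia']
```

A real system would weigh evidence far more subtly, but even this toy version shows the shape of the output: a ranked list of candidates, with plenty of noise near the bottom, left for a human to vet.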
Systems like Watson would not represent the end of expert judgement, but merely its enhancement: doctors still have to make the call, relying on their own experience to filter out any nonsense diagnoses.
That said, it’s not hard to imagine a mid-20th century version of Dr. House that looks something like this: