Technology Listens as Doctors Keep Talking
How speech recognition software is changing so doctors don’t have to.
Doctors don’t like technology to get in their way, especially when they are dictating notes about patients. When the typewriter was invented, doctors found someone else to type their observations. When the tape recorder arrived, they mailed off tapes to transcription services.
With computers, speech recognition software has automated the work of turning a doctor’s spoken words into text. The match has been good for doctors and also for Nuance Communications, based in Burlington, Massachusetts, the market leader in medical dictation software, which last year generated about $450 million in sales of its Dragon speech software to the medical profession.
But now both Nuance and doctors are facing a threat to the way they do business: the spread of electronic medical records. Record-keeping software, heavily promoted by the government, is meant to improve patient care by getting doctors to record data in digital forms with computer-readable fields. The problem: doctors can’t talk into the forms.
That is turning out to be a major obstacle, in part because many physicians consider the dictation of rich, informative patient notes a nearly sacrosanct part of the job. “You’re asking them to do something different in order to help the computer,” says Jim Flanagan, Nuance’s chief medical information officer for research and development.
Now the speech recognition industry is racing to adapt its products so that doctors can use them to fill out the new electronic forms by talking. Working with the IBM researchers who created Watson, the computer system that was able to understand normal conversation—or “natural” language—well enough to beat humans on Jeopardy!, Nuance is developing what it calls Clinical Language Understanding. The technology is designed to automatically extract information from a doctor’s dictated narrative description of a patient and use it to fill out electronic records.
Nuance’s efforts are a response to the Health Information Technology for Economic and Clinical Health (HITECH) Act, passed in 2009. The act provides financial incentives and penalties to push health-care providers toward using electronic patient records. The hope is to reduce errors that come from handwritten notes and make it easier to track all aspects of patient care.
The problem, says Chris Russell, a neurologist with Peachtree Neurological Clinic in Atlanta, is that “electronic health records make life better for everyone in the doctor’s office except the doctor.” Russell, a software developer as well as a physician, created his own product, called NoteSwift, which helps doctors use Nuance’s software to fill out fields in an electronic medical record one by one.
One worry among physicians is that electronic records could threaten the tradition of talking through a patient’s symptoms and the potential causes. Software designers “would like you to think that you can point and click your way through a document and get the same picture of a patient,” says Paul Logan, a cardiovascular nurse practitioner whose company, Logan Solutions, helps physicians implement electronic health records. But doctors who use them, he says, find them annoying and end up recording less detail about patients.
Nuance’s system for interpreting clinical language relies on two key pieces of technology, Flanagan says. The first is what Nuance calls a detailed computer map of medical knowledge, including groups of related symptoms and degrees of severity for particular illnesses. Second, the technology can organize speech into sentences that make grammatical sense to a computer. Using these two tools, the system will draw conclusions about what a doctor is talking about when describing a patient. “It’s important to get actual data out of the text that the computer can understand,” Flanagan says.
Nuance’s system won’t merely pull information out of a description, says Flanagan. It will also flag possible missing pieces and connections. For example, if a doctor describes symptoms but doesn’t offer a diagnosis, it will prompt for an explanation. Flanagan believes this is in line with the goals that led to the push for electronic health records in the first place.
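The two pieces Flanagan describes, a map of medical knowledge plus a prompt for missing information, can be illustrated with a minimal sketch. Everything here is hypothetical: the toy `TERM_MAP`, the field names, and the extraction logic are illustrative stand-ins, not Nuance's actual Clinical Language Understanding system, which relies on a far richer, proprietary medical knowledge map and full linguistic analysis.

```python
# Hypothetical mini-map of medical knowledge: surface terms -> record fields.
# A real system would use a large clinical ontology, not a hand-built dict.
TERM_MAP = {
    "chest pain": ("symptom", "chest pain"),
    "shortness of breath": ("symptom", "shortness of breath"),
    "severe": ("severity", "severe"),
    "mild": ("severity", "mild"),
    "angina": ("diagnosis", "angina"),
}

def extract_fields(narrative: str) -> dict:
    """Scan a dictated narrative and fill computer-readable fields."""
    text = narrative.lower()
    record = {"symptom": [], "severity": [], "diagnosis": []}
    for term, (field, value) in TERM_MAP.items():
        if term in text:
            record[field].append(value)
    return record

def flag_missing(record: dict) -> list:
    """Flag gaps, e.g. symptoms recorded but no diagnosis offered."""
    flags = []
    if record["symptom"] and not record["diagnosis"]:
        flags.append("Symptoms noted but no diagnosis: please elaborate.")
    return flags

if __name__ == "__main__":
    note = "Patient reports severe chest pain and shortness of breath."
    record = extract_fields(note)
    print(record)
    print(flag_missing(record))
```

In this toy version, the dictated sentence fills the symptom and severity fields, and because no diagnosis term appears, the system prompts the doctor for one, mirroring the behavior Flanagan describes.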
“Having back-end clinical language understanding will really transform the way dictation is done,” says Logan. “There’s so much excitement among physicians when you mention this to them.”
But Russell, the Atlanta neurologist, says it may not be necessary to introduce so much new technology to keep doctors dictating. “In a way it may be trying to take on too big a task,” he says. He believes physicians already know how to structure their information as needed. He suggests that speech recognition companies simply need to make it less onerous to navigate electronic forms, especially in the short term. “Our goal is to get back to where we were,” Russell says.