End-of-life care can be stressful for patients and their loved ones, but a new algorithm could help provide better care to people during their final months.
A paper posted on arXiv by Stanford researchers describes a deep neural network that examines a patient’s records and estimates the probability of death within the next three to 12 months. The team found this to be a good proxy for identifying patients who could benefit from palliative care. Importantly, the algorithm also generates reports that explain its predictions to doctors.
Palliative care is a growing trend in the U.S. It can make the end of someone’s life much less painful, and it can usually be provided at home. Even as such care becomes more widespread, though, the researchers note that while 80 percent of Americans say they would like to die at home, only 20 percent end up getting to do so.
The paper points out that a shortage of palliative-care professionals means patients face delays in being examined for services, so using an algorithm could help overstretched doctors focus on patients in the greatest need.
The system works by training on several years’ worth of electronic health records and then analyzing a patient’s own records. It generates a prediction about the patient’s mortality, as well as a report for doctors to review about how it came to its conclusion. This includes details on how much certain factors—like the number of days someone has been in the hospital, the medications prescribed, and the severity of the diagnosis—played into its prediction. The results have so far been positive, and the algorithm is being used in a pilot program at a university hospital, though the team didn’t say where.
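The factor-attribution idea described above can be sketched in a few lines of code. This is a toy stand-in, not the paper’s method: the feature names and weights below are invented for illustration, and a simple logistic model replaces the deep neural network, but it shows how a risk score can be decomposed into per-feature contributions that a doctor could review.

```python
import math

# Hypothetical EHR-derived features with hand-picked weights.
# Purely illustrative -- the actual system learns from years of
# real records with a deep neural network, not this toy model.
WEIGHTS = {
    "days_in_hospital": 0.08,
    "num_medications": 0.05,
    "diagnosis_severity": 0.9,   # e.g. 0 (mild) .. 3 (severe)
}
BIAS = -3.0

def mortality_risk(patient):
    """Return (probability, per-feature contributions to the score)."""
    contributions = {f: w * patient[f] for f, w in WEIGHTS.items()}
    score = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return prob, contributions

patient = {"days_in_hospital": 14, "num_medications": 9, "diagnosis_severity": 2}
prob, why = mortality_risk(patient)

# The "report": rank the factors that pushed the prediction up most.
report = sorted(why.items(), key=lambda kv: kv[1], reverse=True)
```

For this hypothetical patient, the report would list diagnosis severity as the largest contributor, followed by length of stay and medication count, which is the kind of ranked explanation a clinician could sanity-check against their own judgment.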
As we have noted before, doctors are much more likely to trust and accept an automated system if they understand its reasoning. Andrew Ng, a coauthor of the paper and the former head of AI research at Baidu, has worked on previous automated systems that have been shown to outperform doctors in diagnosing lung diseases and spotting heart arrhythmias. But the addition of a clear way to explain the machines’ superhuman abilities may be the most valuable advance yet.