
A new study shows what it might take to make AI useful in health care

Researchers used machine vision to help nurses monitor ICU patients. The way they approached their work shows the value of asking what people actually need artificial intelligence for.
March 23, 2019

Hospital intensive care units can be frightening places for patients. And for good reason. In the US, the ICU has a higher mortality rate than any other hospital unit—between 8% and 19%, totaling roughly 500,000 deaths a year. Those who do not die may suffer in other ways, such as long-term physical and mental impairment. For nurses, working in one can easily lead to burnout because it takes so much physical and emotional stamina to administer round-the-clock care.

Now a new paper, published in npj Digital Medicine, shows how AI might help. It also offers a timely example of how and why AI researchers should work alongside practitioners in other industries.

“This study was really pioneering,” says Eric Topol, a leading physician and author of the newly released book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. He also serves as co–editor in chief of the journal. “They went somewhere where others haven’t been before.”

The study is the result of a six-year collaboration between AI researchers and medical professionals at Stanford University and Intermountain LDS Hospital in Salt Lake City, Utah. It used machine vision to continuously monitor ICU patients during day-to-day tasks. The goal was to test the feasibility of passively tracking how often patients moved and for how long. Earlier studies of ICU patients have shown that movement can accelerate healing, reduce delirium, and prevent muscle atrophy, but those studies have been limited in scope by the difficulty of monitoring patients at scale.

Depth sensors were installed in seven individual patient rooms and collected three-dimensional silhouette data 24 hours a day over the course of two months. The researchers then developed algorithms to analyze the footage—helping them detect when patients climbed into and out of bed or got into and out of a chair, as well as the number of staff involved in each activity.
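The paper doesn't walk through its pipeline in code, but the overall shape of such a system is easy to sketch. Below is a minimal, hypothetical Python illustration of one piece of the idea: labeling each depth frame by where the patient's silhouette sits, then collapsing per-frame labels into mobility events. The bed region, thresholds, and helper functions here are invented for illustration and are not the study's method.

```python
import numpy as np

# Hypothetical sketch only: label each depth frame "in_bed" or "out_of_bed"
# from the silhouette's centroid, then collapse frame labels into events.
# The bed region and all thresholds are assumptions, not the study's values.

BED_REGION = (slice(100, 300), slice(200, 500))  # assumed pixel bounds of the bed

def silhouette_centroid(frame, max_depth_m=3.0):
    """Centroid (row, col) of pixels closer than max_depth_m, or None if empty."""
    mask = (frame > 0) & (frame < max_depth_m)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def label_frame(frame):
    c = silhouette_centroid(frame)
    if c is None:
        return "empty"
    in_bed = (BED_REGION[0].start <= c[0] < BED_REGION[0].stop
              and BED_REGION[1].start <= c[1] < BED_REGION[1].stop)
    return "in_bed" if in_bed else "out_of_bed"

def frames_to_events(labels, fps=1, min_seconds=30):
    """Collapse per-frame labels into (label, start_frame, n_frames) events,
    discarding runs shorter than min_seconds to suppress flicker."""
    events, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            if (i - start) / fps >= min_seconds:
                events.append((labels[start], start, i - start))
            start = i
    return events
```

A deployed system would swap the centroid heuristic for a trained model, as the study's algorithms presumably do; the point here is only the frame-to-event bookkeeping that turns raw depth footage into counts of how often and how long a patient moved.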

The results showed preliminary success: on average, the algorithm for detecting mobility activities correctly identified the activities a patient was performing 87% of the time. The algorithm for tracking the number of personnel fared less well, reaching 68% accuracy. The researchers say that both measures would probably be improved by using multiple sensors in each room, to compensate for people blocking one another from a single sensor’s view.
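Their suggested fix for occlusion is simple to picture: if blocking can only hide people from a given sensor, never add them, then a naive way to fuse multiple sensors is to take the highest count any sensor reports at each timestep. A toy sketch, with an invented data format:

```python
def fuse_people_counts(counts_by_sensor):
    """Naive multi-sensor fusion: at each timestep, trust the sensor that
    sees the most people, on the assumption that occlusion hides people
    but never invents them. Purely illustrative."""
    return [max(counts) for counts in zip(*counts_by_sensor)]

# Sensor A loses sight of one person at t=2; sensor B still sees both.
assert fuse_people_counts([[2, 2, 1, 2],
                           [2, 2, 2, 2]]) == [2, 2, 2, 2]
```

Real fusion would be messier, since sensors can disagree and people straddle views, which is why the researchers frame multiple sensors as a probable rather than guaranteed improvement.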

While the results were not as robust as those typically seen in journal publications, the study is one of the first to demonstrate the feasibility of using sensors and algorithms to understand what’s happening in the ICU. “A lot of people might not have even thought this is possible at all,” says Topol. “A patient’s room is kind of like Grand Central Station. There’s so many things going on.”

The demonstration suggests how such systems might augment the work of hospital staff. If algorithms can detect when a patient has fallen, or even anticipate when someone is starting to struggle, they can alert staff that help is needed. That could spare nurses the worry of leaving one patient alone while they go to care for another.

But what makes the study even more notable is its approach. Much AI research today focuses purely on advancing algorithms out of context, such as by fine-tuning computer-vision models in simulated rather than live environments. But when dealing with sensitive applications such as health care, this can lead to algorithms that, while accurate, are unsafe to deploy or do not tackle the right problems.

In contrast, the Stanford team worked with medical professionals from the very beginning to understand what they needed and reframe those needs as machine-vision problems. For example, through discussions with the nurses and other hospital staff, the AI researchers concluded that using depth sensors rather than regular cameras would protect the privacy of patients and personnel. “The clinicians I worked with—we discussed computer vision and AI for years,” says Serena Yeung, one of the lead authors on the paper, who will become an assistant professor of biomedical data science at Stanford this fall. “Through this process, we were able to unearth new application areas that could benefit from this technology.” 

The approach meant the study went slowly: it took time to get buy-in from all levels of the hospital, and analyzing the hectic, messy environment of the ICU from silhouette data alone was technically complex. But taking that time was critical to designing a safe, effective prototype of a system that will one day genuinely benefit patients and care staff, says Yeung.

Unfortunately, the current culture and incentives in AI research do not lend themselves to such collaborations. The pressure to move fast and publish quickly leads researchers to avoid projects that don’t produce immediate results, and the privatization of a lot of AI funding hurts projects without clear commercialization opportunities. “It is rare to see people working on an end-to-end system in the real world, and also spending the many years that it takes and doing the grunt work that is required to do this type of impactful work,” says Timnit Gebru, co-lead of the Ethical AI Team at Google, who was not involved in the research.

Fortunately, a growing number of experts are pushing to change the status quo. MIT and Stanford are each opening new interdisciplinary research hubs with a charge to pursue human-centered, ethical AI. Yeung also sees opportunities for algorithmically focused AI conferences like NeurIPS and ICML to partner more closely with researchers who focus on social impact.

Topol is optimistic that deeper collaboration between the AI and medical communities will bring forth a new standard of health care. “We’ve never had truly patient-centered care,” he says. “I hope we will get there with this technology.”

This story originally appeared in The Algorithm, our AI newsletter.
