Is DeepMind’s Health-Care App a Solution, or a Problem?

Software will alert medics to early signs of illness, but some critics argue that too much data is being shared.
November 22, 2016

DeepMind, Google’s artificial intelligence outfit, wants to streamline health care by using machine learning to provide medics with intelligent notifications. But not everyone is happy with the piles of data being shared with the company.

The project will provide medics across a number of London hospitals with alerts about patients via an app called Streams. The app is meant to provide easy access to patient histories and test results for nurses and doctors. But its system will also learn to track patterns in patients’ blood test data and flag cases that show early signs of kidney injury to the appropriate doctors.

DeepMind has signed a five-year contract with the U.K.’s National Health Service to provide the app, according to the BBC. In return, DeepMind gets access to records belonging to over 1.6 million patients who are registered with one of the Royal Free NHS Trust’s three London hospitals.

DeepMind wants to make better use of medical records.

The new agreement is actually an overhaul of an existing contract that came under fire earlier this year as part of an investigation by New Scientist. A freedom of information request showed that the Royal Free NHS Trust had already entered into a data-sharing agreement with DeepMind, but the partnership had failed to register the app with the U.K. government’s Medicines and Healthcare products Regulatory Agency. The new contract rectifies that oversight.

But some critics still worry about the arrangement. Speaking to the Financial Times, Julia Powles, a lawyer from the University of Cambridge who specializes in technology, said that DeepMind is getting “swift and broad access into the NHS, on the back of persuasive but unproven promises of efficiency and innovation.” She added that there’s no real way of knowing what the company is doing with the data.

For its part, DeepMind argues that the software will help streamline health care at the hospitals by “freeing up clinicians’ time from juggling multiple pager, desktop-based, and paper systems.” Ultimately, the company claims that once the software is fully rolled out it could “redirect over half a million hours per year away from admin and towards direct patient care.” It also points out that patient data is encrypted and used only by DeepMind, not by Google more broadly.

It’s not the only health-related project that DeepMind is working on. The company is also using its machine-learning approaches to help University College Hospital streamline its radiotherapy treatments, and is working with Moorfields Eye Hospital to spot the early signs of visual degeneration.

Much rests on the underlying aims of DeepMind. Its critics clearly suspect that these projects are a way to obtain large quantities of data that would otherwise be off-limits to it. But if its goal is simply to make health-care services as efficient as possible, then there’s little way to enlist its help other than offering up such large data sets. With any luck, the mantras of “first, do no harm” and “don’t be evil” can happily coexist.

(Read more: BBC, Financial Times, “DeepMind Will Use AI to Streamline Targeted Cancer Treatment,” “DeepMind’s First Medical Research Gig Will Use AI to Diagnose Eye Disease”)
