Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions.
AI paternalism could put patient autonomy at risk—if we let it.
This article is from The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.
Would you trust medical advice generated by artificial intelligence? It’s a question I’ve been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they’re better, faster, and cheaper than medically trained professionals.
Many of these technologies have well-known problems. They’re trained on limited or biased data, and they often don’t work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.
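If that sounds abstract, here is a minimal sketch of how it plays out, written in Python with scikit-learn. Everything in it is synthetic and hypothetical: the "clinical features," the two patient groups, and the proportions are all invented for illustration, not drawn from any real medical dataset.

```python
# A toy model trained mostly on one group: overall accuracy looks fine,
# but performance for the underrepresented group is far worse.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 2000
X = rng.normal(size=(n, 2))           # two made-up clinical features
group = rng.integers(0, 2, size=n)    # hypothetical patient group

# Assume the feature-label relationship differs by group: the signal
# lives in feature 0 for group 0 and in feature 1 for group 1.
y = np.where(group == 0, X[:, 0] > 0, X[:, 1] > 0).astype(int)

# Biased training set: all of group 0, but only ~5% of group 1.
train = (group == 0) | (rng.random(n) < 0.05)
model = LogisticRegression().fit(X[train], y[train])

pred = model.predict(X)
print(f"overall accuracy: {(pred == y).mean():.2f}")
for g in (0, 1):
    mask = group == g
    print(f"group {g} accuracy: {(pred[mask] == y[mask]).mean():.2f}")
```

The overall number looks respectable; only the per-group breakdown exposes that the model has essentially learned one group's pattern and is close to guessing for the other.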
There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we’re seeing a rise in what’s known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now doctors may be inclined to trust AI over a patient’s own lived experience, and even over their own clinical judgment.
AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.
“Sometimes we don’t actually know what kinds of systems are being used,” says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.
Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, dermatologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI’s results, even when those results contradicted their own clinical opinion.
There’s a very real risk that we’ll come to rely on these technologies to a greater extent than we should. And here’s where paternalism could come in.
“Paternalism is captured by the idiom ‘the doctor knows best,’” write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person’s feelings, beliefs, culture, and anything else that might influence the choices any of us make.
“Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI,” McCradden and Kirsch continue. They say there is a “rising trend toward algorithmic paternalism.” This would be problematic for a whole host of reasons.
For a start, as mentioned above, AI isn’t infallible. These technologies are trained on historical data sets that come with their own flaws. “You’re not sending an algorithm to med school and teaching it how to learn about the human body and illnesses,” says Wachter.
As a result, “AI cannot understand, only predict,” write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.
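Here is a second minimal sketch of that point, again with entirely invented synthetic data: a classifier fit to historically recorded diagnoses reproduces past misses rather than correcting them, because the recorded labels are all it can see.

```python
# Toy illustration of label bias: assume past clinicians missed 30% of
# true cases in group 1; a model trained on those records inherits this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 5000
X = rng.normal(size=(n, 3))           # stand-in "biopsy features"
group = rng.integers(0, 2, size=n)    # hypothetical patient group

# Ground truth: whether the lesion really is cancerous.
truth = (X[:, 0] + X[:, 1] > 0.5).astype(int)

# Historically recorded labels, with the assumed misses in group 1.
missed = (group == 1) & (truth == 1) & (rng.random(n) < 0.3)
recorded = np.where(missed, 0, truth)

# Group enters as a feature, standing in for signals (such as skin
# tone in images) that a real model would pick up implicitly.
features = np.column_stack([X, group])
model = LogisticRegression(max_iter=1000).fit(features, recorded)

# The model learns the *recorded* labels, so it assigns systematically
# lower cancer probabilities to the true cancers in group 1.
proba = model.predict_proba(features)[:, 1]
for g in (0, 1):
    pos = (group == g) & (truth == 1)
    print(f"group {g}, mean predicted probability for true cancers: "
          f"{proba[pos].mean():.2f}")
```

The model has no way to recover the cases the historical labels never recorded; it can only predict what past data says, which is exactly McCradden and Kirsch's point.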
And identifying past trends won’t necessarily tell doctors everything they need to know about how a patient’s treatment should proceed. Today, doctors and patients are meant to collaborate on treatment decisions, and growing use of AI shouldn’t diminish patient autonomy.
So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data—an expensive endeavor that probably won’t appeal to those who are looking to use AI to cut costs, says Wachter.
Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won’t necessarily work for others, whether that’s because of their biology or their beliefs. “Humans are not the same everywhere,” says Wachter.
The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.
Read more from Tech Review's archive
Philip Nitschke, otherwise known as “Dr. Death,” is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.
In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.
Will has also covered how AI that works really well in a lab setting can fail in the real world.
My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.
Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.
From around the web
Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)
Genetic analyses can suggest targeted treatments for people with colorectal cancer, but people with African ancestry are less likely than those with European ancestry to have the mutations these treatments target. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)
Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)
Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)
A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks—despite not having completed clinical testing. (STAT)