Why AI shouldn’t be making life-and-death decisions

Plus: Meta wants to use AI to give people legs in the metaverse. 

To receive The Algorithm in your inbox every Monday, sign up here.

Welcome to The Algorithm! 

Let me introduce you to Philip Nitschke, also known as “Dr. Death” or “the Elon Musk of assisted suicide.” 

Nitschke has a curious goal: He wants to “demedicalize” death and make assisted suicide as unassisted as possible through technology. As my colleague Will Heaven reports, Nitschke has developed a coffin-size machine called the Sarco. People seeking to end their lives must first pass an algorithm-based psychiatric self-assessment that asks three questions: Who are you? Where are you? And do you know what will happen when you press that button? If they pass, they can enter the machine, and the Sarco will release nitrogen gas, which asphyxiates them within minutes.

In Switzerland, where assisted suicide is legal, candidates for euthanasia must demonstrate mental capacity, which is typically assessed by a psychiatrist. But Nitschke wants to take the psychiatrist out of the equation entirely.

Nitschke is an extreme example. But as Will writes, AI is already being used to triage and treat patients in a growing number of health-care fields. Algorithms are becoming an increasingly important part of care, and we must try to ensure that their role is limited to medical decisions, not moral ones.

Will explores the messy morality of efforts to develop AI that can help make life-and-death decisions here.

I’m probably not the only one who feels extremely uneasy about letting algorithms make decisions about whether people live or die. Nitschke’s work seems like a classic case of misplaced trust in algorithms’ capabilities. He’s trying to sidestep complicated human judgments by introducing a technology that could make supposedly “unbiased” and “objective” decisions.

That is a dangerous path, and we know where it leads. AI systems reflect the humans who build them, and they are riddled with biases. We’ve seen facial recognition systems that fail to recognize Black people, or that label them as criminals or gorillas. In the Netherlands, tax authorities used an algorithm to try to weed out benefits fraud, only to penalize innocent people, mostly lower-income people and members of ethnic minorities. The consequences were devastating for thousands: bankruptcy, divorce, suicide, and children being taken into foster care.

As AI is rolled out in health care to help make some of the highest-stakes decisions there are, it’s more crucial than ever to critically examine how these systems are built. Even if we managed to create a perfect algorithm with zero bias, algorithms would still lack the nuance and complexity to make decisions about humans and society on their own. We should carefully question how much decision-making we really want to turn over to AI. There is nothing inevitable about letting it creep deeper and deeper into our lives and societies. That is a choice made by humans.

Deeper Learning

Meta wants to use AI to give people legs in the metaverse 

Last week, Meta unveiled its latest virtual-reality headset. It has an eye-watering $1,499.99 price tag. At the virtual event, Meta pitched its vision for a “next-generation social platform” accessible to everyone. As my colleague Tanya Basu points out: “Even if you are among the lucky few who can shell out a grand and a half for a virtual-reality headset, would you really want to?”

The legs were fake: One of the big selling points for the metaverse was the ability for avatars to have legs. Legs! At the event, a leggy avatar of Meta CEO Mark Zuckerberg announced that the company was going to use artificial intelligence to enable this feature, allowing avatars not only to walk and run but also to wear digital clothing. But there’s one problem: Meta hasn’t actually figured out how to do this yet, and the “segment featured animations created from motion capture,” as Kotaku reports.

Meta’s AI lab is one of the biggest and richest in the industry, and it has hired some of the field’s top engineers. I can’t imagine that this multibillion-dollar push to make VR Sims happen is very fulfilling work for Meta’s AI researchers. Do you work on an AI/ML team at Meta? I want to hear from you. (Drop me a line at melissa.heikkila@technologyreview.com.)

Bits and Bytes

Learn more about the exploited labor behind artificial intelligence
In an essay, Timnit Gebru, former co-lead of Google’s ethical AI team, and researchers at her Distributed AI Research Institute argue that AI systems are driven by labor exploitation, and that AI ethics discussions should prioritize transnational worker organization efforts. (Noema)

AI-generated art is the new clip art
Microsoft has teamed up with OpenAI to add the text-to-image AI DALL-E 2 to its Office suite. Users will be able to enter prompts to create images that can be used in greeting cards or PowerPoint presentations. (The Verge)

An AI version of Joe Rogan interviewed an AI Steve Jobs
This is pretty mind-blowing. Text-to-voice AI startup Play.ht trained an AI model on Steve Jobs’s biography and all the recordings it could find of him online in order to mimic the way Jobs would have spoken in a real podcast. The content is pretty silly, but it won’t be long until the technology develops enough to fool anyone. (Podcast.ai)

Tour Amazon’s dream home, where every appliance is also a spy
This story offers a clever way to visualize how invasive Amazon’s push to embed “smart” devices in our homes really is. (The Washington Post)

Tweet of the week
What it’s like to build a machine-learning startup these days, from Hugging Face CEO and cofounder Clem Delangue.

Thanks for making it this far! Catch you next week. 

Melissa
