How Can We Optimize AI for the Greatest Good, Instead of Profit?

At a summit in Geneva, academics, policymakers, and humanitarians are plotting how AI could be used to transform the planet for the better.

How can we ensure that artificial intelligence provides the greatest benefit to all of humanity? 

By that, we don’t necessarily mean to ask how we create AIs with a sense of justice. That's important, of course—but a lot of time is already spent weighing the ethical quandaries of artificial intelligence. How do we ensure that systems trained on existing data aren’t imbued with human ideological biases that discriminate against users? Can we trust AI doctors to correctly identify health problems in medical scans if they can’t explain what they see? And how should we teach driverless cars to behave in the event of an accident?

The thing is, all of those questions contain an implicit assumption: that artificial intelligence is already being put to use in, for instance, the workplaces, hospitals, and cars that we all use. While that might be increasingly true in the wealthy West, it’s certainly not the case for billions of people in poorer parts of the world. To that end, United Nations agencies, AI experts, policymakers, and businesses have gathered in Geneva, Switzerland, for a three-day summit called AI for Good. The aim: “to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity.”

That is, of course, a broad and open-ended mission. It’s also unfair to suggest that AI hasn’t been put to good use already. Facebook has developed machine-learning software to work out from aerial imagery exactly which parts of the world are inhabited, in a bid to deliver Internet access to the entire world. Amazon has worked with satellite providers to use AI to identify and track, say, the growth of shantytowns. And IBM has experimented with using artificial intelligence to ease China’s smog problems.

Even so, those are small projects when you’re considering global issues such as inequality. Indeed, during the first day of the summit, Yoshua Bengio, a computer scientist at the University of Montreal, argued that a key priority for using AI for good is to redistribute wealth and reduce inequalities within and between nations.

That, he suggested, could be achieved by focusing on research that benefits everyone—such as improving the environment, or building services available to anyone with a phone. That final point chimes nicely with a growing endeavor in Silicon Valley, as the likes of Apple, Google, and Facebook all push to develop new AI software that can run faster and more efficiently on mobile devices, rather than requiring expensive Internet connections to haul data back and forth to the cloud.

Of course, incentivizing organizations to build systems that benefit the greatest number of people isn’t itself straightforward—after all, where's the money? And to that point, cognitive scientist and ex-Uber AI researcher Gary Marcus floated an intriguing idea at the summit: a CERN for AI. For physics, CERN provided a forum in which researchers could build equipment and test theories that would further humanity’s understanding, and yet would never have been funded by regular industry or academia. Marcus wonders whether something similar could be true for AI. Perhaps such an organization would produce software that always sought to improve the lives of the many rather than the few?

If that sounds like a pipe dream, the message delivered at the same event by Salil Shetty, Secretary General of the human rights organization Amnesty International, may be worth bearing in mind. "If we base AI on the way the world works today, it will be riddled with historical biases,” he explained. “We can do better."

(Read more: Nature, "Tech Giants Grapple with the Ethical Concerns Raised by the AI Boom," "Why We Should Expect Algorithms to Be Biased," "Can This Man Make AI More Human?")
