MIT Technology Review

How Can We Optimize AI for the Greatest Good, Instead of Profit?

At a summit in Geneva, academics, policymakers, and humanitarians are plotting how AI could be used to transform the planet for the better.

How can we ensure that artificial intelligence provides the greatest benefit to all of humanity? 

By that, we don’t necessarily mean to ask how we create AIs with a sense of justice. That’s important, of course—but a lot of time is already spent weighing the ethical quandaries of artificial intelligence. How do we ensure that systems trained on existing data aren’t imbued with human ideological biases that discriminate against users? Can we trust AI doctors to correctly identify health problems in medical scans if they can’t explain what they see? And how should we teach driverless cars to behave in the event of an accident?


The thing is, all of those questions contain an implicit assumption: that artificial intelligence is already being put to use in, for instance, the workplaces, hospitals, and cars that we all use. While that might be increasingly true in the wealthy West, it’s certainly not the case for billions of people in poorer parts of the world. To that end, United Nations agencies, AI experts, policymakers, and businesses have gathered in Geneva, Switzerland, for a three-day summit called AI for Good. The aim: “to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity.”

That is, of course, a broad and open-ended mission. It’s also unfair to suggest that AI hasn’t been put to good use already. Facebook has developed machine-learning software to work out from aerial imagery exactly which parts of the world are inhabited, in a bid to deliver Internet access to the entire world. Amazon has worked with satellite providers to use AI to identify and track, say, the growth of shantytowns. And IBM has experimented with using artificial intelligence to ease China’s smog problems.

Even so, those are small projects when set against global issues such as inequality. Indeed, during the first day of the summit, Yoshua Bengio, a computer scientist at the University of Montreal, argued that a key priority for putting AI to good use should be redistributing wealth and reducing inequality, both within and between nations.

That, he suggested, could be achieved by focusing on research that benefits everyone—such as improving the environment, or building services available to anyone with a phone. That final point chimes nicely with a growing endeavor in Silicon Valley, as the likes of Apple, Google, and Facebook all push to develop new AI software that can run faster and more efficiently on mobile devices, rather than requiring expensive Internet connections to haul data back and forth to the cloud.

Of course, incentivizing organizations to build systems that benefit the greatest number of people isn’t itself straightforward—after all, where’s the money? And to that point, cognitive scientist and ex-Uber AI researcher Gary Marcus floated an intriguing idea at the summit: a CERN for AI. In physics, CERN provided a forum in which researchers could build equipment and test theories that would further humanity’s understanding, yet would never have been funded by regular industry or academia. Marcus wonders whether something similar could work for AI. Perhaps, he suggested, such an organization could produce software designed to improve the lives of the many rather than the few.

If that sounds like a pipe dream, the message delivered at the same event by Salil Shetty, Secretary General of the human rights organization Amnesty International, may be worth bearing in mind. “If we base AI on the way the world works today, it will be riddled with historical biases,” he explained. “We can do better.”

(Read more: Nature, “Tech Giants Grapple with the Ethical Concerns Raised by the AI Boom,” “Why We Should Expect Algorithms to Be Biased,” “Can This Man Make AI More Human?”)

 
