
Solving bias

Bias is to AI as rust is to steel. It corrupts decisions, leaving us unsure of the integrity of our systems. Lurking within data and algorithms, these hidden prejudices skew AI results in unexpected and undesired directions. Next month in San Francisco, EmTech Digital explores the practical approaches to addressing bias in algorithms and data. 

Building ethical artificial intelligence is an enormously complex task because bias is in the eye of the beholder. Is an AI-based college admission system that considers gender and geography to balance the pool of accepted applicants more or less biased than one that does not? Even if we can agree that a balanced system is better, who has the authority to make these decisions?
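
As a concrete illustration of what "balanced" can mean in practice, the sketch below compares acceptance rates across applicant groups and reports the gap between them, a demographic-parity check that is one of several competing fairness metrics. The data, group labels, and function names are hypothetical, not taken from the article or any EmTech Digital talk.

```python
# Minimal sketch: measuring the acceptance-rate gap between groups
# (demographic parity). All data below is hypothetical and illustrative.

from collections import defaultdict

def selection_rates(decisions):
    """Return the acceptance rate per group from (group, accepted) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for group, accepted in decisions:
        counts[group][0] += int(accepted)
        counts[group][1] += 1
    return {g: acc / total for g, (acc, total) in counts.items()}

# Hypothetical admission decisions: (applicant group, accepted?)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

rates = selection_rates(decisions)
print(rates)  # group_a accepted 2 of 3, group_b accepted 1 of 3
gap = max(rates.values()) - min(rates.values())
print(f"Acceptance-rate gap between groups: {gap:.2f}")
```

A gap near zero is what a demographic-parity audit looks for, but whether that is the right target is exactly the kind of judgment call the paragraph above raises: other metrics (equalized odds, calibration) can disagree with it on the same data.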


Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing diseases, and, of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways. How can we prevent aberrant behavior in machine learning as the technology moves out of research labs and into the real world?

Join us next month to explore these topics and more at EmTech Digital.

Purchase your ticket now.
