MIT Technology Review

The Dangers of Tech-Bro AI

Tabitha Goldstaub, a cofounder of CognitionX, which helps companies deploy AI, says that diversifying the field is necessary to make sure products actually work well.

If you think about artificial intelligence and the core components of what makes an AI, you have the fact that it’s given a goal and then will find a way to reach that goal. It’s often very opaque as to how it reached that goal. If you’re building some unconscious bias into a machine, you might not know that it’s there; the output could be detrimental to women, and it’s very tough to work out exactly why that has happened.

In traditional technology, you can see [what has happened]: women dying in car crashes because the crash-test dummies were the shape of a man rather than the shape of a woman. [With AI there could be] similar life-or-death situations, in drug trials or in autonomous vehicles and things like that.


There are already examples of [gender bias in AI today]: Google ads displaying higher-paying [job] ads to men more often than to women. We can hypothesize other situations that could happen—what if women weren’t as able to get loans or mortgages or insurance?


I don’t have a dystopian view of AI. I don’t see killer robots. I’m so much more focused on the narrow applications, and I think that if you look at every single one of those narrow applications, there is a chance that it negatively affects women. I don’t think artificial intelligence is the issue here; it’s an additional issue rather than the cause. We’re talking about the risk that our unconscious sexism or unconscious racism seeps into the machines that we’re building.

How do we get anyone who’s building AI to think about these things? We need to have consumers demand ethical AI. Not enough people are seeing this as more than just a gender issue; this is an actual, fundamental product issue.

As told to Rachel Metz



