MIT Technology Review

If chatbots are going to get better, they might need to offend you

AIs have gotten better at holding a conversation, but tech firms are wary of rolling them out for fear of PR nightmares.

Better bots: The New York Times says recent AI advances helped Microsoft and Facebook build a “new breed” of chatbots that carefully choose how to converse. Microsoft, for instance, built one that picks the most human-sounding sentence from a bunch of contenders to create “precise and familiar” responses.
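The Times piece doesn't detail Microsoft's ranking model, but the pattern it describes is candidate reranking: generate several possible replies, score each one, keep the top scorer. A minimal sketch in Python, with a toy humanness_score standing in for whatever learned ranker Microsoft actually uses:

    def humanness_score(reply: str) -> float:
        """Toy stand-in for a trained ranking model: rewards replies that
        are mid-length and properly punctuated. A real system would score
        candidates with a learned model, not heuristics like these."""
        words = reply.split()
        length_bonus = -abs(len(words) - 10)  # prefer conversational length
        punct_bonus = 1.0 if reply.endswith((".", "!", "?")) else 0.0
        return length_bonus + punct_bonus

    def pick_reply(candidates: list[str]) -> str:
        """Return the candidate reply that scores as most human-sounding."""
        return max(candidates, key=humanness_score)

    replies = [
        "AFFIRMATIVE. REQUEST PROCESSED.",
        "Sure, I can help with that. What are you looking for?",
        "ok",
    ]
    print(pick_reply(replies))  # -> "Sure, I can help with that. ..."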


But: Like Microsoft’s disastrously racist Tay bot, these new bots still go wrong. Facebook says as many as 1 in 1,000 of its chatbots’ utterances may be racist, aggressive, or otherwise unwelcome. That’s almost inevitable given how they’re trained: the online conversations used as training data are bound to contain unsavory text.

Why it matters: If the bots are going to keep improving, they must go in front of real users. But tech firms fear PR disasters if the software says the wrong thing. We may need to be more accepting of mistakes if we want the bots to get better.
