AIs have gotten better at holding a conversation, but tech firms are wary of rolling them out for fear of PR nightmares.
Better bots: The New York Times says recent AI advances helped Microsoft and Facebook build a “new breed” of chatbots that carefully choose how to converse. Microsoft, for instance, built one that picks the most human-sounding sentence from a bunch of contenders to create “precise and familiar” responses.
But: Like Microsoft’s disastrously racist Tay bot, they still go wrong. Facebook says 1 in 1,000 of its chatbots’ utterances may be racist, aggressive, or generally unwelcome. That’s almost inevitable when the bots are trained on online conversations, since such text is bound to include unsavory material.