If chatbots are going to get better, they might need to offend you
AIs have gotten better at holding a conversation, but tech firms are wary of rolling them out for fear of PR nightmares.
Better bots: The New York Times says recent AI advances helped Microsoft and Facebook build a “new breed” of chatbots that carefully choose how to converse. Microsoft, for instance, built one that picks the most human-sounding sentence from a bunch of contenders to create “precise and familiar” responses.
But: Like Microsoft’s disastrously racist Tay bot, they still go wrong. Facebook says 1 in 1,000 of its chatbots’ utterances may be racist, aggressive, or otherwise unwelcome. That’s almost inevitable when the bots are trained on online conversations, which are bound to contain unsavory text.
Why it matters: If the bots are going to keep improving, they must be tested on real users. But tech firms fear PR disasters if the software says the wrong thing. We may need to be more accepting of mistakes if we want the bots to get better.