If chatbots are going to get better, they might need to offend you
AIs have gotten better at holding a conversation, but tech firms are wary of rolling them out for fear of PR nightmares.
Better bots: The New York Times says recent AI advances helped Microsoft and Facebook build a “new breed” of chatbots that carefully choose how to converse. Microsoft, for instance, built one that picks the most human-sounding sentence from a bunch of contenders to create “precise and familiar” responses.
But: Like Microsoft’s disastrously racist Tay bot, they still go wrong. Facebook says 1 in 1,000 of its chatbots’ utterances may be racist, aggressive, or generally unwelcome. That’s almost inevitable when the bots are trained on online conversations, which are bound to contain unsavory text.
Why it matters: If the bots are going to keep improving, they must go in front of real users. But tech firms fear PR disasters if the software says the wrong thing. We may need to be more accepting of mistakes if we want the bots to get better.