Microsoft Says Maverick Chatbot Tay Foreshadows the Future of Computing

Despite a history of making irksome digital assistants, Microsoft thinks you want to converse with your computer.
March 30, 2016

If you’d inadvertently unleashed a Neo-Nazi sexbot on an unsuspecting Internet, you might be reluctant to proclaim the technology as the future of computing. Microsoft, it seems, has no such qualms.

Just a few days after yanking the errant chatbot Tay from the Internet, Microsoft’s CEO, Satya Nadella, announced that he expects similar (though presumably less offensive) bots to become commonplace. In fact, Microsoft seems to believe that “conversational computing” could be a major new paradigm in computing.

“We want to take the power of human language and apply it more pervasively to all of the computing interface and the computing interactions,” Nadella said during his keynote at the company’s Build 2016 conference for developers.

Nadella also acknowledged the Tay debacle, though. “We want to build technology such that it gets the best of humanity, not the worst,” he said a little awkwardly. “Just last week when we launched Tay, which is a social bot, in the United States, we quickly realized it was not up to this mark, and so we’re back to the drawing board.”

Microsoft CEO Satya Nadella at the company's Build conference in San Francisco.

Microsoft demonstrated how developers could make use of chatbots by tapping into the voice-controlled personal assistant for Windows devices, Cortana, and by building their own customized bots using new tools launched today.

It’s a risky bet, and not just because, as Tay shows, conversational bots are prone to annoying errors. Microsoft also has a history of foisting irksome digital assistants on its users, and people still complain bitterly about the company’s well-meaning yet idiotic Windows assistant Clippy.

Even so, as one artificial intelligence expert pointed out to me, the Tay episode might not be such a bad thing for Microsoft. The real danger for the company these days may be seeming irrelevant compared to competitors like Google and Facebook. Anything that makes the company seem technologically adventurous, even edgy, can’t be bad.

Unless, that is, the company hasn’t learned anything from Clippy and Tay.

(Read more: "Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot," Bloomberg)
