If you’d inadvertently unleashed a Neo-Nazi sexbot on an unsuspecting Internet, you might be reluctant to proclaim the technology as the future of computing. Microsoft, it seems, has no such qualms.
Just a few days after yanking the errant chatbot Tay from the Internet, Microsoft’s CEO, Satya Nadella, announced that he expects similar (though presumably less offensive) bots to become commonplace. In fact, Microsoft seems to believe that “conversational computing” could be a major new paradigm in computing.
“We want to take the power of human language and apply it more pervasively to all of the computing interface and the computing interactions,” Nadella said during his keynote at the company’s Build 2016 conference for developers.
Nadella also acknowledged the Tay debacle, though. “We want to build technology such that it gets the best of humanity, not the worst,” he said a little awkwardly. “Just last week when we launched Tay, which is a social bot, in the United States, we quickly realized it was not up to this mark, and so we’re back to the drawing board.”
Microsoft demonstrated how developers could make use of chatbots by tapping into the voice-controlled personal assistant for Windows devices, Cortana, and by building their own customized bots using new tools launched today.
It’s a risky bet, and not just because, as Tay shows, conversational bots are prone to annoying errors. Microsoft also has a history of foisting irksome digital assistants on its users, and people still complain bitterly about the company’s well-meaning yet idiotic Windows assistant Clippy.
Even so, as one artificial intelligence expert pointed out to me, the Tay episode might not be such a bad thing for Microsoft. The real danger for the company these days may be seeming irrelevant compared with competitors like Google and Facebook. Anything that makes the company look technologically adventurous, even edgy, can’t hurt.
Unless, that is, the company hasn’t learned anything from Clippy and Tay.
(Read more: "Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot," Bloomberg)