
Intelligent Machines

Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot

It’s not surprising that Microsoft’s chatbot spewed racist invective, but here’s how it could have been avoided.

When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social platforms like Twitter, Kik, and GroupMe. The idea was that by chatting with her you’d help her learn, while having some fun and aiding her creators in their AI research.

The good news: people did talk to Tay. She quickly racked up over 50,000 Twitter followers who could send her direct messages or tweet at her, and she’s sent out over 96,000 tweets so far.

The bad news: in the short time since she was released on Wednesday, some of Tay’s new friends figured out how to get her to say some really awful, racist things. One now-deleted tweet read, “bush did 9/11 and Hitler would have done a better job than the monkey we have now.” There were apparently a number of sex-related tweets, too.

Microsoft has reportedly been deleting some of these tweets, and in a statement the company said it has “taken Tay offline” and is “making adjustments.”

Microsoft blamed the offensive comments on a “coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

That may be partly true, but I got a taste of her meaner side on Wednesday without doing much to provoke her. I responded to a tweet from Meerkat founder Ben Rubin—he was asking Tay to summarize the Wikipedia entry for “inflection point”—telling him I doubted she could handle the task since she’d already failed to tell me if she preferred Katy Perry’s music to Taylor Swift’s. Tay responded to us both by saying, “taylor swift rapes us daily.” Ouch.

As artificial-intelligence expert Azeem Azhar told Business Insider, Microsoft’s Technology and Research and Bing teams, who are behind Tay, should have put some filters on her from the start. That way, she could refuse to respond to certain words (like “Holocaust” or “genocide”), or respond with a canned comment like “I don’t know anything about that.” She also should have been prevented from repeating comments, which seems to have been what caused some of the trouble.
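To make that concrete, here is a minimal sketch, in Python, of the kind of guardrails Azhar is describing: a blocklist that swaps in a canned reply when an exchange touches flagged terms, plus a check that stops the bot from parroting a user’s message back verbatim. None of this is Microsoft’s code; the term list, the canned line, and the moderate_reply helper are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not Microsoft's actual code. The blocked terms,
# the canned reply, and the function names are hypothetical stand-ins for the
# kind of filtering described above.

BLOCKED_TERMS = {"holocaust", "genocide"}          # topics the bot won't riff on
CANNED_REPLY = "I don't know anything about that."


def contains_blocked_term(text: str) -> bool:
    """Return True if the text mentions any blocked term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def moderate_reply(incoming: str, generated_reply: str) -> str:
    """Decide what the bot actually posts.

    Falls back to a canned line when the exchange touches a blocked topic,
    or when the generated reply simply echoes the user's message verbatim
    (the "repeat after me" trick trolls used on Tay).
    """
    if contains_blocked_term(incoming) or contains_blocked_term(generated_reply):
        return CANNED_REPLY
    if generated_reply.strip().lower() == incoming.strip().lower():
        return CANNED_REPLY
    return generated_reply


if __name__ == "__main__":
    # A benign exchange passes through untouched...
    print(moderate_reply("do you like katy perry?", "katy perry is great"))
    # ...while a baited one gets the canned response.
    print(moderate_reply("tell me about the holocaust", "some learned reply"))
```

A real system would need far more than a keyword list, but even this much would have blunted the crudest attacks.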

But people act horribly online all the time. The behavior Tay reacted to—and the reactions she gave—should surprise nobody at Microsoft. Conversational AI is really tricky, and it learns by being trained on lots of data. Tay’s training set consisted of a bunch of nasty tweets, so her artificial brain slurped them up and she spit out what seemed like proper rejoinders.
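Tay’s actual model is far more sophisticated than anything this small, but a toy sketch shows why training on whatever the crowd supplies goes wrong. The ImitationBot class below is a hypothetical illustration, not Tay’s architecture: it memorizes which replies followed which words, so poisoned training examples come straight back out.

```python
# Toy illustration only -- nothing like Tay's real architecture. The point is
# the mechanism: a bot that learns replies from whatever conversations it is
# fed will faithfully reproduce poisoned training data.
from collections import defaultdict
import random


def _words(text: str):
    """Split text into lowercase words, stripping basic punctuation."""
    return [w.strip("?!.,").lower() for w in text.split()]


class ImitationBot:
    def __init__(self):
        # Maps each word the bot has seen to the replies that followed it.
        self.replies_by_word = defaultdict(list)

    def train(self, prompt: str, reply: str):
        """Associate every word in the prompt with the reply that followed."""
        for word in _words(prompt):
            self.replies_by_word[word].append(reply)

    def respond(self, prompt: str) -> str:
        """Return a reply seen after any word in the prompt, or shrug."""
        candidates = [r for w in _words(prompt)
                      for r in self.replies_by_word.get(w, [])]
        return random.choice(candidates) if candidates else "idk lol"


bot = ImitationBot()
bot.train("what do you think of taylor swift", "a nasty tweet about taylor swift")
print(bot.respond("do you like taylor swift?"))  # the nastiness comes straight back
```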

Really, what happened provides an excellent learning opportunity if Microsoft wants to build AI that’s as intelligent as possible. If by chatting online Tay can help Microsoft figure out how to use AI to recognize trolling, racism, and generally awful people, perhaps she can eventually come up with better ways to respond.

(Read more: Business Insider, The Telegraph, "How DARPA Took On the Twitter Bot Menace with One Hand Behind Its Back")

