Artificial intelligence

Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants

Yandex’s head of machine intelligence says Microsoft’s Tay showed how important it is to fix AI problems fast.
March 27, 2018

Remember Tay, the chatbot Microsoft unleashed on Twitter and other social platforms two years ago that quickly turned into a racist, sex-crazed neo-Nazi?

What started out as an entertaining social experiment—get regular people to talk to a chatbot so it could learn while they, hopefully, had fun—became a nightmare for Tay’s creators. Users soon figured out how to make Tay say awful things. Microsoft took the chatbot offline after less than a day.

Yet Misha Bilenko, head of machine intelligence and research at Russian tech giant Yandex, thinks it was a boon to the field of AI helpers.

Speaking at MIT Technology Review’s annual EmTech Digital conference in San Francisco on Tuesday, Bilenko said Tay’s bugs—like the bot’s vulnerability to being gamed into learning or repeating offensive phrases—taught great lessons about what can go wrong.

The way Tay rapidly morphed from a fun-loving bot (she was trained to have the personality of a facetious 19-year-old) into an AI monster, he said, showed how important it is to be able to fix problems quickly, and how hard that can be. It also illustrated how readily people anthropomorphize AI, believing it holds deep-seated beliefs rather than seeing it as a statistical machine.

“Microsoft took the flak for it, but looking back, it’s a really useful case study,” he said.

Chatbots and intelligent assistants have changed considerably since 2016; they’re far more popular, they’re available everywhere from smartphone apps to smart speakers, and they’re getting increasingly capable. But they’re still not good at one of the things Tay was trying to do: displaying a personality and making casual conversation.

Bilenko doesn’t expect this to change soon—at least, not in the next five years. The conversations humans have are “very difficult,” he said.

