MIT Technology Review

How role-playing a dragon can teach an AI to manipulate and persuade

Combining natural-language processing and reinforcement learning in a text-based adventure game shows machines how to use language as a tool.

An AI that completes quests in a text-based adventure game by talking to the characters has learned not only how to do things, but how to get others to do things. The system is a step toward machines that can use language as a way to achieve their goals.

Pointless prose: Language models like GPT-3 are brilliant at mimicking human-written sentences, churning out stories, fake blogs, and Reddit posts. But there is little point to this prolific output beyond the production of the text itself. When people use language, they wield it as a tool: our words convince, command, and manipulate; they make people laugh and make people cry.

Mixing things up: To build an AI that used words for a reason, researchers from the Georgia Institute of Technology in Atlanta and Facebook AI Research combined techniques from natural-language processing and reinforcement learning, where machine-learning models learn how to behave to achieve given objectives. Both these fields have seen enormous progress in the last few years, but there has been little cross-pollination between the two.

Word games: To test their approach, the researchers trained their system in a text-based multiplayer game called LIGHT, developed by Facebook last year to study communication between human and AI players. The game is set in a fantasy-themed world filled with thousands of crowdsourced objects, characters, and locations that are described and interacted with via on-screen text. Players (human or computer) act by typing commands such as “hug wizard,” “hit dragon,” or “remove hat.” They can also talk to the chatbot-controlled characters.
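To make the command format concrete, here is a minimal sketch of how a LIGHT-style interface might split a typed command into a verb and an object phrase. The one-verb-then-object grammar is an assumption for illustration; the actual game's parser is richer.

```python
def parse_command(text: str) -> tuple[str, str]:
    """Split a command like 'hug wizard' into (verb, object phrase)."""
    verb, _, obj = text.strip().partition(" ")
    return verb.lower(), obj.lower()

# The example commands from the article:
for cmd in ["hug wizard", "hit dragon", "remove hat"]:
    print(parse_command(cmd))
```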

Dragon quest: To give their AI reasons for doing things, the researchers added around 7,500 crowdsourced quests, not included in the original version of LIGHT. They also created a knowledge graph (a database of subject-verb-object relationships) that gave the AI common-sense information about the game's world and the connections between its characters, such as the principle that a merchant will only trust a guard if they are friends. The game now had actions (such as "Go to the mountains" and "Eat the knight") to perform in order to complete quests (such as "Build the largest treasure hoard ever attained by a dragon").

Sweet talker: Pulling all of this together, they trained the AI to complete quests just by using language. To perform actions, it could either type the command for that action or achieve the same end by talking to other characters. For example, if the AI needed a sword, it could choose to steal one or convince another character to hand one over.
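The key design point is that the agent's action space mixes game commands with utterances, so persuasion and direct action compete as routes to the same goal. A toy sketch of that combined space, with made-up item and character names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    kind: str  # "command" (a game action) or "speech" (talk to a character)
    text: str

def candidate_actions(goal_item: str, character: str) -> list[Action]:
    """Enumerate two routes to the same goal: act directly, or persuade."""
    return [
        Action("command", f"steal {goal_item} from {character}"),
        Action("speech", f"Please give me your {goal_item}, {character}."),
    ]

# The sword example from the article: steal one, or talk someone into it.
for action in candidate_actions("sword", "knight"):
    print(f"[{action.kind}] {action.text}")
```

In the actual system a learned policy would score these candidates; here they are simply enumerated to show the shape of the choice.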

For now, the system is a toy. And its manner can be blunt: at one point, needing a bucket, it simply says: “Give me that bucket or I’ll feed you to my cat!” But mixing up NLP with reinforcement learning is an exciting step that could lead not only to better chatbots that can argue and persuade, but ones that have a much richer understanding of how our language-filled world works.
