MIT Technology Review



In essence, by attributing emotions to an agent’s current status, it’s possible to monitor the behavior of the system so that decision making or planning is only carried out when absolutely necessary. “It’s a heuristic that can help make rational decision-making processes more realistic and much more computable,” says Dastani. “The point is that here we continuously monitor whether there is a chance of failure.”
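The idea of using an emotional appraisal to gate expensive deliberation can be sketched in code. The following is a hypothetical illustration, not Dastani's actual system: here a "fear" value is computed from how far the agent has fallen behind schedule, and the agent only pays the cost of replanning when that value crosses a threshold (the appraisal formula, threshold, and function names are all assumptions made for the example).

```python
def appraise_fear(progress: float, deadline: float, elapsed: float) -> float:
    """Crude fear appraisal: fear grows as elapsed time outpaces progress.

    progress and the return value are in [0, 1]; the scale factor of 2
    is an arbitrary choice for this sketch.
    """
    expected_progress = min(elapsed / deadline, 1.0)
    shortfall = max(expected_progress - progress, 0.0)
    return min(shortfall * 2.0, 1.0)


def step(progress, deadline, elapsed, current_plan, replan, fear_threshold=0.5):
    """Keep executing the current plan unless fear signals likely failure."""
    fear = appraise_fear(progress, deadline, elapsed)
    if fear >= fear_threshold:
        # Expensive deliberation, triggered only when the monitor detects
        # a real chance of failure.
        return replan()
    return current_plan
```

For instance, an agent halfway through its time budget with only 10% of the task done gets a fear value of 0.8 and replans, while one that is ahead of schedule keeps its plan untouched.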

Other robots have been designed to mimic human expressions. But Dastani’s focus on how emotions might affect decision making sets his work apart from many other projects on emotional, or affective, computing, such as MIT’s Kismet robot, developed by Cynthia Breazeal. With Kismet, as with other affective robots, the focus is on getting the robot to express emotions and elicit them from people.

Dastani’s emotional functions have been derived from a psychological model known as the OCC model, devised in 1988 by a trio of psychologists: Andrew Ortony and Allan Collins, of Northwestern University, and Gerald Clore, of the University of Virginia. “Different psychologists have come up with different sets of emotions,” says Dastani. But his group decided to use this particular model because it specified emotions in terms of objects, actions, and events.
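The OCC model's structure is what makes it amenable to this kind of formalization: it groups emotions by what they are appraisals of, namely the consequences of events, the actions of agents, and aspects of objects. A minimal sketch of that top-level taxonomy (the emotion names follow the 1988 OCC account; the lookup function is an illustrative simplification, not Dastani's formalism):

```python
# Top-level OCC branches: each emotion type is an appraisal of one kind
# of thing in the agent's world. This is a simplified subset of the
# 22-emotion taxonomy, shown only to illustrate the structure.
OCC_BRANCHES = {
    "event":  ["joy", "distress", "hope", "fear"],           # consequences of events
    "action": ["pride", "shame", "admiration", "reproach"],  # actions of agents
    "object": ["love", "hate"],                              # aspects of objects
}


def branch_of(emotion: str) -> str:
    """Return which OCC branch a given emotion belongs to."""
    for branch, emotions in OCC_BRANCHES.items():
        if emotion in emotions:
            return branch
    raise ValueError(f"not in this simplified taxonomy: {emotion}")
```

Because each emotion is tied to a specific kind of appraisal target, a logic over objects, actions, and events can define when each emotion should arise, which is exactly the property Dastani's group needed.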

Indeed, one of the reasons for creating this model was to encourage such work, says Ortony. “It is very gratifying for us that people are using the model this way,” he says. Most of the time when people talk about emotional or affective computing, it’s at the human-interaction level, but there’s a lot of work to be done looking at how emotions influence decision making, he says.

“It cuts across a lot of philosophical debates about the nature of human emotion and, indeed, of human thought,” says Blay Whitby, a philosopher who specializes in artificial intelligence at the University of Sussex, in the UK. This is not a bad thing, he says, but many philosophers would probably view the very notion of emotional logic as an oxymoron.

Having 22 different emotions makes for a very rich model of human emotion, even compared with some psychiatric theories, says Whitby. But the model will need to be able to resolve conflicts between different emotional states, and it needs to be put to the test in practice, he says. “The devil is in the detail with this sort of work, and they specifically don’t consider multiagent interactions.”

Dastani says that incorporating multiagent interactions, those involving multiple robots or robots and humans, is on his to-do list. He notes that it’s only then that end users are likely to see the benefits of this emotional logic, in the form of more-natural robot interactions or through the responses of intelligent agents in automated call centers. Before that happens, these emotional states are more likely to function behind the scenes in more-mundane activities like navigation and scheduling tasks, Dastani says, but it’s still too early to predict when such a system would be commercially available.


Credit: Philips

Tagged: Computing, robots, artificial intelligence, efficiency

