A concept in psychology is helping AI to better navigate our world

The concept: When we look at a chair, regardless of its shape and color, we know that we can sit on it. When a fish is in water, regardless of its location, it knows that it can swim. This is known as the theory of affordance, a term coined by psychologist James J. Gibson. It states that when intelligent beings look at the world, they perceive not simply objects and their relationships but also their possibilities. In other words, the chair “affords” the possibility of sitting. The water “affords” the possibility of swimming. The theory could partly explain why animal intelligence is so generalizable: we often immediately know how to engage with new objects because we recognize their affordances.
The idea: Researchers at DeepMind are now using this concept to develop a new approach to reinforcement learning. In typical reinforcement learning, an agent learns through trial and error, beginning with the assumption that any action is possible. A robot learning to move from point A to point B, for example, will assume that it can move through walls or furniture until repeated failures tell it otherwise. The idea is that if the robot were instead first taught its environment’s affordances, it would immediately rule out a significant fraction of the failed trials it would otherwise have to perform. This would make its learning process more efficient and help it generalize across different environments.
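To make the mechanics concrete, here is a minimal sketch in Python of what affordance-aware reinforcement learning could look like, assuming a discrete action space and simple tabular Q-learning. The class and method names are hypothetical, and this is a sketch of the general idea, not DeepMind’s actual implementation: the agent first records which actions the environment permits in each state, then restricts its later trial-and-error learning to those actions.

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right", "up", "down"]

class AffordanceAwareAgent:
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        # Until proven otherwise, assume every action is afforded everywhere.
        self.afforded = defaultdict(lambda: set(ACTIONS))
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def learn_affordances(self, env, steps=1000):
        # Phase 1: try random actions and record which ones the environment
        # actually permits. An action that leaves the state unchanged is
        # treated as not afforded in that state.
        state = env.reset()
        for _ in range(steps):
            action = random.choice(ACTIONS)
            next_state, _, _ = env.step(action)
            if next_state == state:
                self.afforded[state].discard(action)
            state = next_state

    def act(self, state):
        # Epsilon-greedy choice restricted to afforded actions, so moves
        # known to be blocked are never even attempted during learning.
        options = list(self.afforded[state]) or ACTIONS
        if random.random() < self.epsilon:
            return random.choice(options)
        return max(options, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)]
        )
```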
The experiments: The researchers set up a simple virtual scenario. They placed a virtual agent in a 2D environment with a wall down the middle and had it explore its range of motion until it had learned what the environment would allow it to do: its affordances. They then gave the agent a set of simple objectives to achieve through reinforcement learning, such as moving a certain distance to the right or to the left. They found that, compared with an agent that hadn’t learned the affordances, the affordance-aware agent avoided moves that would get it blocked by the wall partway through its motion, allowing it to achieve its goals more efficiently.
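That setup is easy to mimic in code. Below is a toy version of the experiment, paired with the agent sketch above; the grid size, the gap in the wall, the reward values, and the training loop are all illustrative assumptions rather than details from the paper.

```python
class WalledGrid:
    # 5x5 grid with a wall down the middle column (x == 2), except for a
    # single gap at the top row so the two sides stay connected. The gap
    # and all other specifics are illustrative assumptions.
    MOVES = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

    def __init__(self, goal=(4, 2)):
        self.goal = goal
        self.state = (0, 2)

    def reset(self):
        self.state = (0, 2)
        return self.state

    def _blocked(self, pos):
        x, y = pos
        off_grid = not (0 <= x < 5 and 0 <= y < 5)
        in_wall = (x == 2 and y != 0)
        return off_grid or in_wall

    def step(self, action):
        dx, dy = self.MOVES[action]
        candidate = (self.state[0] + dx, self.state[1] + dy)
        if not self._blocked(candidate):
            self.state = candidate  # blocked moves leave the state unchanged
        done = self.state == self.goal
        return self.state, (1.0 if done else -0.01), done

env = WalledGrid()
agent = AffordanceAwareAgent()
agent.learn_affordances(env, steps=2000)  # phase 1: map out what is possible
state = env.reset()
for _ in range(500):                      # phase 2: learn the task itself
    action = agent.act(state)
    next_state, reward, done = env.step(action)
    agent.update(state, action, reward, next_state)
    state = env.reset() if done else next_state
```

The design choice worth noting is the separation into two phases: the affordance table is learned once, task-free, and then reused across whatever goals the agent is later given, which is where the claimed efficiency and generalization come from.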
Why it matters: The work is still in its early stages, so the researchers used only a simple environment and primitive objectives. But their hope is that their initial experiments will help lay a theoretical foundation for scaling the idea up to much more complex actions. In the future, they see this approach allowing a robot to quickly assess whether it can, say, pour liquid into a cup. Having developed a general understanding of which objects afford the possibility of holding liquid and which do not, it won’t have to repeatedly miss the cup and pour liquid all over the table to learn how to achieve its objective.