
A Robot Finds Its Way Using Artificial “GPS” Brain Cells

One robot has been given a simulated version of the brain cells that let animals build a mental map of their surroundings.
October 19, 2015

The behavior and interplay of two types of neurons in the brain help give humans and other animals an uncanny ability to navigate by building a mental map of their surroundings. Now one robot has been given a similar cluster of virtual cells to help it find its own way around.

Researchers in Singapore simulated two types of cells known to be used for navigation in the brain—so-called “place” and “grid” cells—and showed they could enable a small-wheeled robot to find its way around. Rather than simulate the cells physically, they created a simple two-dimensional model of the cells in software. The work was led by Haizhou Li, a professor at the Agency for Science, Technology and Research (A*STAR).

“Artificial grid cells could provide an adaptive and robust mapping and navigation system,” Li wrote in an e-mail sent jointly with Huajin Tang and Yuan Miaolong, two research scientists at A*STAR who coauthored a paper about the work. “Humans and animals have an instinctual ability to navigate freely and deliberately in an environment rather effortlessly.”

The work is significant because it shows the potential for having machines mimic more complex activity in the brain. Roboticists increasingly use artificial neural networks to train robots to perform tasks such as object recognition and grasping, but these networks do not faithfully reflect the complexity and subtlety of a real biological brain.  

“Neural networks are actually very loosely inspired by the brain,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle. “They are distributed computing elements, but they’re very simple as compared with neurons; the connections are extremely simple as compared with a synapse.” He says this new development that takes inspiration from the brain “seems like good work.”

Place cells were first identified in the 1970s by John O’Keefe, who found that they fire whenever a rat passes the same spot in an area. Grid cells, pinpointed in a different part of the brain by May-Britt and Edvard Moser in 2005, activate when an animal arrives at any location on a triangular grid of points, thereby providing a more detailed sense of position in space.
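The published work doesn’t spell out its exact equations in this article, but the standard simplified picture of these two cell types is easy to sketch in code: a place cell can be modeled as a Gaussian bump of activity around one preferred location, and a grid cell as the sum of three plane waves 60 degrees apart, which peaks on a triangular lattice of points. The sketch below, in Python with NumPy, uses that textbook model; the function names and parameter values are illustrative assumptions, not details of the A*STAR system.

```python
import numpy as np

def place_cell_rate(pos, center, sigma=0.3):
    """Gaussian bump: the cell fires strongly only near its one preferred spot."""
    d2 = np.sum((np.asarray(pos, dtype=float) - np.asarray(center, dtype=float)) ** 2)
    return float(np.exp(-d2 / (2 * sigma ** 2)))

def grid_cell_rate(pos, spacing=1.0, orientation=0.0, phase=(0.0, 0.0)):
    """Sum of three plane waves 60 degrees apart: activity peaks on a
    triangular lattice of points covering the whole environment."""
    p = np.asarray(pos, dtype=float) - np.asarray(phase, dtype=float)
    k = 4 * np.pi / (np.sqrt(3) * spacing)          # wave number for this grid spacing
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    rate = sum(np.cos(k * (p[0] * np.cos(a) + p[1] * np.sin(a))) for a in angles)
    return float((rate + 1.5) / 4.5)                # rescale roughly into [0, 1]

# Example: sample both cells as a simulated robot moves along a straight line.
for x in np.linspace(0.0, 2.0, 5):
    pos = (x, 1.0)
    print(f"pos=({pos[0]:.1f}, {pos[1]:.1f})  "
          f"place={place_cell_rate(pos, center=(1.0, 1.0)):.2f}  "
          f"grid={grid_cell_rate(pos):.2f}")
```

A population of such cells, each with a different preferred location, spacing, or phase, gives a downstream readout enough information to decode the robot’s position, which is essentially the role these cells are thought to play in the brain.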

Together with other types of cells, and by processing sensory information, grid and place cells are thought to give animals an innate sense of the world around them and of their location within it. The discovery of these cells earned the three scientists involved the Nobel Prize in Medicine in 2014 (see “Nobel for Brain’s Location Code”).

The Singaporean researchers tested the approach on a robot let loose in a 35-square-meter office space. They had the robot roam around and verified that its artificial place and grid cells behaved in a way comparable to their biological counterparts.

The navigation system isn’t yet as good as a conventional one, and the researchers say they need to develop a better understanding of the way biological cells function in order to improve it. However, they suggest that it could offer advantages over conventional systems, which may be confused by changes to an environment, for example.

As well as providing a more efficient and reliable way for machines to get about, Li hopes that the work could help neuroscientists understand the functioning of the brain’s navigation system. “This will provide a solution to predict neural activities using mobile robots before conducting experiments on rats,” the researchers write.

Artificial intelligence researchers are increasingly looking to research on the brain for ways to refine modern approaches to machine learning. However, Etzioni of the Allen Institute notes that the complexity of the organ makes applying neurological research difficult. “Which is why this work is exciting,” he says.
