
Outward Bound for Robots

A new approach teaches robots how to navigate unfamiliar territory as humans might.

A computer navigation system based on a part of the brain called the hippocampus has been tested on an autonomous robotic car. By enabling the robot to take what its creators call “cognitive fingerprints” of its surroundings, the software allows the vehicle to explore and remember places in much the same way mammals do.

Autonomous robotic cars could navigate by storing cognitive “fingerprints” of places, much as humans do.

Tests on the robotic vehicle – an adapted DaimlerChrysler Smart Car equipped with a laser range finder and an omnidirectional camera as sensors – have shown that it can successfully explore and navigate more than one and a half kilometers of urban terrain without getting lost.

Similarly, the system has been tested on an indoor robot by “blindfolding” it, taking it to an unknown location, and getting it to find its way home, says Adriana Tapus, a roboticist at the University of Southern California in Los Angeles who developed the system. This “kidnapping task” is much more difficult than it might seem, she says. Yet the underlying problem, known as simultaneous localization and mapping (SLAM), is becoming increasingly important for robots, autonomous vehicles, and military unmanned aerial vehicles (UAVs).

The challenge is to create a map from which a robot can navigate while it is still exploring that same environment, says Chris Melhuish, director of the Bristol Robotics Laboratory at the University of the West of England and Bristol University in the U.K. This is difficult because it involves mapping an unfamiliar environment while at the same time updating one’s position within this map. It’s a chicken-and-egg problem, says Tapus: “To localize the robot, a map is necessary, and to update a map, the position of the mobile robot is needed.”

In addition, there’s the uncertainty inherent in all sensor measurements, which adds to the uncertainty in the map that the robot builds, says Andrew Davison, a computer scientist at Imperial College London, who was one of the first researchers to develop a real-time SLAM system for robots.
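To make that chicken-and-egg structure concrete, here is a minimal Python sketch of an interleaved localize-and-map loop on a one-dimensional corridor. The landmark layout, the noise levels, and the crude blending correction are all hypothetical stand-ins, not the algorithms used in Davison’s or Tapus’s systems; the point is only that mapping uses the current pose estimate while localization uses the map being built.

# A minimal 1-D sketch of the SLAM "chicken-and-egg" loop described above.
# The robot, landmark layout, and noise levels are hypothetical and purely
# illustrative; this is not the system the article describes.
import random

random.seed(0)

true_landmarks = [2.0, 5.0, 9.0]   # ground-truth landmark positions (unknown to the robot)
est_landmarks = {}                 # the robot's map: landmark id -> estimated position
est_pose = 0.0                     # the robot's estimated position
true_pose = 0.0

for step in range(20):
    # 1. Predict: move forward and update the pose estimate from noisy odometry.
    true_pose += 0.5
    est_pose += 0.5 + random.gauss(0, 0.05)   # odometry drift accumulates over time

    # 2. Observe: measure the (noisy) range to each landmark.
    for lid, lm in enumerate(true_landmarks):
        measured_range = (lm - true_pose) + random.gauss(0, 0.1)

        if lid not in est_landmarks:
            # Mapping needs the pose: place a new landmark relative to where
            # the robot *thinks* it is.
            est_landmarks[lid] = est_pose + measured_range
        else:
            # Localization needs the map: nudge the pose toward where the
            # stored landmark says the robot should be (a crude correction).
            pose_from_map = est_landmarks[lid] - measured_range
            est_pose = 0.9 * est_pose + 0.1 * pose_from_map

print(f"estimated pose: {est_pose:.2f}  (true pose: {true_pose:.2f})")
print("estimated landmarks:", {k: round(v, 2) for k, v in est_landmarks.items()})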

To solve this problem, Tapus decided to copy the way people navigate. Working with Roland Siegwart, head of the Autonomous Systems Laboratory at the École Polytechnique Fédérale de Lausanne in Switzerland, she developed a system that takes the raw features detected by the robot’s sensors, such as vertical edges, corners, and colors, and combines them into a single low-level description, or “fingerprint,” of that place.

This fingerprint consists of a circular, or looped, list of significant features around the robot. “It’s not the features that are new, it is the combination of these features in a unique representation,” says Tapus, who believes that human brains form the same kinds of combinations as they establish the relative positions of landmarks.
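As a rough illustration of that idea, the Python sketch below assembles a fingerprint as a circular list of feature labels ordered by the bearing at which each feature is seen. The Feature class, the label codes, and the make_fingerprint helper are hypothetical; the real system extracts its features from a laser range finder and an omnidirectional camera.

# A minimal sketch of a "fingerprint of a place" as a circular list of
# features ordered by bearing. The feature labels and the detection step are
# hypothetical stand-ins for the real extractors (vertical edges, corners,
# color patches) mentioned in the article.
from dataclasses import dataclass

@dataclass
class Feature:
    bearing_deg: float   # direction of the feature relative to the robot, 0-360
    label: str           # e.g. "E" = vertical edge, "C" = corner, "R" = red patch

def make_fingerprint(features):
    """Return the place's fingerprint: feature labels sorted by bearing.

    Because the list covers a full 360-degree sweep, it is treated as
    circular: the last element is a neighbor of the first.
    """
    ordered = sorted(features, key=lambda f: f.bearing_deg)
    return [f.label for f in ordered]

# Example: one panoramic scan of a place yields a short circular sequence.
scan = [Feature(10, "E"), Feature(95, "C"), Feature(180, "R"), Feature(300, "E")]
print(make_fingerprint(scan))   # ['E', 'C', 'R', 'E']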

“What we find in mammals are these cells called ‘place’ cells,” says Melhuish. In rats, these cells, which reside in the hippocampus, have been shown to fire in distinct patterns depending on the animal’s location, he says. Indeed, there’s a lot of interest in trying to copy biological models in robotics, says Melhuish, since they often appear to work so well.

Traditional SLAM solutions tend to use a robot’s sensors to continuously construct geometric maps of its surroundings or to create symbolic representations of features around the robot. But both approaches involve a trade-off, says Tapus: if the representation of a place is very precise, the robot may have more difficulty recognizing that place at a later stage, but if it’s not precise enough, the place might be too easily confused with other places.

The cognitive fingerprints avoid this trade-off by representing locations robustly and effectively while requiring few computational resources. In addition, because they preserve the relative positions of landmarks, it’s easy to use probabilistic algorithms to reliably match places, even if the robot is not positioned in precisely the same spot or if some of the objects in the environment have moved.
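A toy version of that matching step, under the simplifying assumption that two scans of the same place yield equal-length fingerprints, might look like the Python sketch below: it rotates one circular fingerprint against the other and keeps the best overlap score, so a change of heading or a few altered features need not prevent a match. The real system’s probabilistic matcher is more sophisticated; the similarity function here is a hypothetical stand-in.

# A simple stand-in for fingerprint matching. The article's system uses a
# probabilistic matcher; this sketch just scores every rotation of one
# circular fingerprint against another and keeps the best overlap.
def similarity(fp_a, fp_b):
    """Best fraction of matching positions over all rotations of fp_b."""
    if not fp_a or len(fp_a) != len(fp_b):
        return 0.0
    best = 0
    n = len(fp_b)
    for shift in range(n):
        rotated = fp_b[shift:] + fp_b[:shift]
        matches = sum(1 for a, b in zip(fp_a, rotated) if a == b)
        best = max(best, matches)
    return best / n

stored   = ["E", "C", "R", "E"]        # fingerprint remembered for a place
observed = ["R", "E", "E", "C"]        # same place, seen from a different heading
print(similarity(stored, observed))    # 1.0: a rotation of the stored fingerprint

A threshold on the returned score would then decide whether the two scans count as the same place.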

This could prove particularly useful for car navigation systems: although GPS is sufficient for coarse positioning, says Tapus, it’s often useful to know the position of the robot or vehicle with respect to buildings, trees, and intersections. For that, a more refined technique is required, particularly when it comes to things that move, such as people.

Even if Tapus’s approach proves useful, though, it may be hard to say how closely it resembles human problem solving. Davison, for one, cautions against making too strong a comparison. “As computing power increases,” he says, “it is often hard to tell whether the algorithms being used successfully in robotics and computer vision have much relation with how the human brain solves these problems.”
