Outward Bound for Robots

A new approach teaches robots how to navigate unfamiliar territory as humans might.

A computer navigation system based on a part of the brain called the hippocampus has been tested on an autonomous robotic car. By enabling the robot to take what its creators call “cognitive fingerprints” of its surroundings, the software allows the vehicle to explore and remember places in much the same way mammals do.

Autonomous robotic cars could navigate by storing cognitive “fingerprints” of places, much as humans do.

Tests on the robotic vehicle, an adapted DaimlerChrysler Smart car equipped with a laser range finder and an omnidirectional camera as sensors, have shown that it can successfully explore and navigate more than one and a half kilometers of urban terrain without getting lost.

Similarly, the system has been tested on an indoor robot by “blindfolding” it, taking it to an unknown location, and getting it to find its way home, says Adriana Tapus, a roboticist at the University of Southern California in Los Angeles who developed the system. This “kidnapping task” is much more difficult than it might seem, she says. The underlying problem, known as simultaneous localization and mapping (SLAM), is becoming increasingly important for robots, autonomous vehicles, and military unmanned aerial vehicles (UAVs).

The challenge is to create a map from which a robot can navigate while it is still exploring that same environment, says Chris Melhuish, director of the Bristol Robotics Laboratory at the University of the West of England and Bristol University in the U.K. This is difficult because it involves mapping an unfamiliar environment while at the same time updating one’s position within this map. It’s a chicken-and-egg problem, says Tapus: “To localize the robot, a map is necessary, and to update a map, the position of the mobile robot is needed.”

In addition, there’s the uncertainty inherent in all sensor measurements, which adds to the uncertainty in the map that the robot builds, says Andrew Davison, a computer scientist at Imperial College London, who was one of the first researchers to develop a real-time SLAM system for robots.
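To see why this circularity bites, consider a toy sketch, written here in Python. It is not Tapus’s system; the landmark names and noise level are invented for illustration. Localizing requires landmarks already in the map, and adding landmarks requires a pose estimate, so each step inherits the other’s error:

    import random

    SENSOR_NOISE = 0.1  # assumed std. dev. of each relative measurement

    def observe(true_pose, true_landmarks):
        """Simulate noisy measurements of each landmark relative to the robot."""
        return {name: (x - true_pose[0] + random.gauss(0, SENSOR_NOISE),
                       y - true_pose[1] + random.gauss(0, SENSOR_NOISE))
                for name, (x, y) in true_landmarks.items()}

    def localize(est_map, observations):
        """Estimate the robot's pose from landmarks already in the map."""
        guesses = []
        for name, (ox, oy) in observations.items():
            if name in est_map:
                mx, my = est_map[name]
                guesses.append((mx - ox, my - oy))
        if not guesses:
            return None  # nothing recognized: the robot is lost
        return (sum(x for x, _ in guesses) / len(guesses),
                sum(y for _, y in guesses) / len(guesses))

    def update_map(est_map, pose, observations):
        """Add newly seen landmarks to the map, using the pose estimate."""
        for name, (ox, oy) in observations.items():
            est_map.setdefault(name, (pose[0] + ox, pose[1] + oy))

    # One step of the loop: map error feeds the pose, and vice versa.
    world = {"tree": (2.0, 0.0), "door": (0.0, 3.0)}
    est_map = {"tree": (2.0, 0.0)}      # partial map built so far
    obs = observe((1.0, 1.0), world)    # robot is really at (1, 1)
    pose = localize(est_map, obs)       # estimated from the known tree
    update_map(est_map, pose, obs)      # "door" enters the map with that error
    print(pose, est_map)

Because the errors compound in this way, practical SLAM systems track both the pose and the map with explicit probability distributions rather than single point estimates.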

To solve this problem, Tapus decided to copy the way people navigate. Working with Roland Siegwart, head of the Autonomous Systems Laboratory at the École Polytechnique Fédérale de Lausanne in Switzerland, she developed a system that takes raw data detected by the robot’s sensors, such as vertical edges, corners, and colors, and combines them into a single low-level description or “fingerprint” of that place.

This fingerprint consists of a circular, or looped, list of significant features around the robot. “It’s not the features that are new, it is the combination of these features in a unique representation,” says Tapus, who believes that human brains form the same kinds of combinations as they establish the relative positions of landmarks.
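One way to picture such a fingerprint is as a circular list of feature labels ordered by the bearing at which each feature appears during a full sweep around the robot. The sketch below, with an invented feature vocabulary, illustrates the idea; it is not the exact representation Tapus and Siegwart use:

    # A fingerprint in this spirit: feature labels ordered by the bearing
    # at which they appear in one 360-degree sweep around the robot. The
    # labels here are placeholders, not the system's actual vocabulary.

    def make_fingerprint(features):
        """features: list of (bearing_in_degrees, label) pairs from one sweep."""
        ordered = sorted(features, key=lambda f: f[0] % 360.0)
        return [label for _, label in ordered]

    sweep = [(210.0, "color:red"), (12.0, "edge"), (95.0, "corner"),
             (300.0, "edge")]
    print(make_fingerprint(sweep))
    # ['edge', 'corner', 'color:red', 'edge']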

“What we find in mammals are these cells called ‘place’ cells,” says Melhuish. In rats, these cells, which reside in the hippocampus, have been shown to fire in distinct patterns depending on the animal’s location, he says. Indeed, there’s a lot of interest in trying to copy biological models in robotics, says Melhuish, since they often appear to work so well.

Traditional SLAM solutions tend to use a robot’s sensors to continuously construct geometric maps of its surroundings or to create symbolic representations of features around the robot. But these approaches involve a trade-off, says Tapus: if the representation is very precise, the robot may have difficulty recognizing the same place later, but if it’s not precise enough, one place might be too easily confused with another.

The cognitive fingerprints avoid this trade-off by representing locations robustly while requiring few computational resources. And because they preserve the relative positions of landmarks, probabilistic algorithms can reliably match places, even if the robot is not positioned in precisely the same spot or if some of the objects in the environment have moved.
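As a rough illustration of rotation-tolerant matching, the sketch below scores two fingerprints by trying every rotation of one against the other and keeping the best sequence-alignment ratio, so that a few inserted or missing features, say from a moved object, merely lower the score. The published matching algorithm is more sophisticated; this brute-force version only conveys the idea:

    from difflib import SequenceMatcher

    def similarity(fp_a, fp_b):
        """Best alignment ratio over all rotations of fp_b (1.0 = identical)."""
        if not fp_a or not fp_b:
            return 0.0
        return max(SequenceMatcher(None, fp_a, fp_b[s:] + fp_b[:s]).ratio()
                   for s in range(len(fp_b)))

    here = ["edge", "corner", "color:red", "edge"]
    there = ["corner", "color:red", "edge", "edge"]  # same place, rotated view
    print(similarity(here, there))  # 1.0: a perfect match up to rotation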

This could prove particularly useful for car navigation systems. Although GPS is sufficient for coarse positioning, says Tapus, it is often useful to know the position of the robot or vehicle with respect to buildings, trees, and intersections. That calls for a more refined technique, particularly when it comes to things that move, such as people.

Even if Tapus’s approach proves useful, though, it may be hard to say how closely it resembles human problem solving. Davison, for one, cautions against making too strong a comparison. “As computing power increases,” he says, “it is often hard to tell whether the algorithms being used successfully in robotics and computer vision have much relation with how the human brain solves these problems.”
