MIT Technology Review

Undersea Robots That Can Think

Giving “cognitive” control to underwater robots.

When deploying autonomous underwater vehicles (AUVs), an engineer spends a lot of time writing low-level commands to direct the robot through a mission plan. Now a new programming approach developed at MIT and the Woods Hole Oceanographic Institution gives robots more “cognitive” capabilities, letting humans specify high-level goals while the robot figures out how to achieve them.

For example, an engineer may give a robot a list of locations to explore, along with time constraints and physical directions, such as staying a certain distance above the seafloor. Using the MIT system, the robot plans out a mission, choosing which locations to explore, in what order, within a given time frame. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task.
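The planning behavior described above can be sketched in a few lines. This is an illustrative toy, not the MIT system's actual code: the site names, time costs, and greedy priority scheme are all assumptions made for the example.

```python
# Hypothetical sketch of goal-level mission planning: given a list of
# sites (name, time cost, priority) and a time budget, pick what to
# explore and silently drop tasks that no longer fit.

def plan_mission(sites, time_budget):
    """Greedily schedule the highest-priority sites that fit the budget."""
    plan, remaining = [], time_budget
    for name, cost, priority in sorted(sites, key=lambda s: -s[2]):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
        # else: this task is dropped rather than aborting the mission

    return plan

sites = [("vent_field", 40, 3), ("ridge", 70, 2), ("canyon", 30, 1)]
print(plan_mission(sites, 100))  # → ['vent_field', 'canyon']
```

The key design point mirrors the article: when an unforeseen constraint (here, a tight time budget) makes a task infeasible, the planner drops that task and continues, instead of requiring a human to re-script the whole mission.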


In March, the team, in collaboration with the Schmidt Ocean Institute, tested the system off the western coast of Australia using an autonomous underwater glider. Over multiple deployments, the glider operated safely among a number of other autonomous vehicles while receiving higher-level commands. If another vehicle took longer than expected to explore a particular area, the glider reshuffled its priorities, choosing to stay in its current location longer in order to avoid potential collisions.


When developing the system, a group led by Brian Williams, a professor of aeronautics and astronautics, took inspiration from the Star Trek franchise and the top-down command structure of the starship Enterprise, after which Williams named the system.

Just as a hierarchical crew runs the fictional starship, Williams’s Enterprise system incorporates levels of decision makers. One component of the system acts as a “captain,” deciding where and when to explore. Another component functions as a “navigator,” planning out a route to meet mission goals. The last component works as a “doctor” or “engineer,” diagnosing problems and replanning autonomously.
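The layered structure described above can be sketched as three cooperating components. This is a hypothetical illustration of the architecture, not the Enterprise system's actual code: the class names, goal fields, and battery-based health check are all invented for the example.

```python
# Illustrative sketch of a hierarchical decision-making loop with three
# layers: a "captain" that chooses goals, a "navigator" that plans
# routes, and an "engineer" that diagnoses faults and triggers replanning.

class Captain:
    def choose_goal(self, goals):
        # Decide where and when to explore: take the top-priority goal.
        return max(goals, key=lambda g: g["priority"])

class Navigator:
    def plan_route(self, goal):
        # Plan a route to meet the mission goal (trivial placeholder:
        # a single waypoint at the goal's location).
        return [goal["location"]]

class Engineer:
    def healthy(self, vehicle_state):
        # Diagnose problems; an unhealthy vehicle forces a replan.
        return vehicle_state.get("battery", 0.0) > 0.2

def mission_step(goals, vehicle_state):
    captain, navigator, engineer = Captain(), Navigator(), Engineer()
    if not engineer.healthy(vehicle_state):
        return "replan"  # hand control back to the upper layers
    goal = captain.choose_goal(goals)
    return navigator.plan_route(goal)

goals = [{"priority": 2, "location": "ridge"},
         {"priority": 5, "location": "vent_field"}]
print(mission_step(goals, {"battery": 0.8}))  # → ['vent_field']
```

The division of labor is the point: each layer makes decisions at its own level of abstraction, so a fault detected by the lowest layer can trigger autonomous replanning without a human issuing new low-level commands.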

Giving robots control of higher-level decision making frees engineers to think about overall strategy, says Williams, who developed a similar system for NASA after it lost contact with the Mars Observer days before the spacecraft was scheduled to begin orbiting Mars in 1993. Such a system could also reduce the number of people needed on research cruises and let robots operate without being in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.

“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams says. “You could send sea vessels that send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”
