77 Mass Ave

Undersea Robots That Can Think

Giving “cognitive” control to underwater robots.
August 18, 2015

When deploying autonomous underwater vehicles (AUVs), engineers typically spend much of their time writing low-level commands to step the robot through a mission plan. Now a new programming approach developed at MIT and the Woods Hole Oceanographic Institution gives robots more “cognitive” capabilities, letting humans specify high-level goals while the robot figures out how to achieve them.

For example, an engineer may give a robot a list of locations to explore, along with time constraints and physical directives, such as staying a certain distance above the seafloor. Using the MIT system, the robot plans out a mission on its own, choosing which locations to explore, and in what order, within the given time frame. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task.
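The planning behavior described above can be pictured as a scheduler that fits the most important goals into a time budget and drops the rest. The sketch below is purely illustrative: the `Task` fields, the greedy priority rule, and the travel-time model are assumptions for demonstration, not the actual MIT planner, which reasons over richer constraints.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float   # hours needed on site
    travel: float     # hours to reach the site
    priority: int     # higher = more important

def plan_mission(tasks, time_budget):
    """Greedily schedule the highest-priority tasks that fit the
    time budget; any task that would overrun it is dropped."""
    schedule, remaining = [], time_budget
    for task in sorted(tasks, key=lambda t: -t.priority):
        cost = task.travel + task.duration
        if cost <= remaining:
            schedule.append(task.name)
            remaining -= cost
        # else: drop the task, much as the real system abandons
        # goals that become infeasible mid-mission
    return schedule

tasks = [
    Task("vent_field", duration=2.0, travel=1.0, priority=3),
    Task("seamount",   duration=4.0, travel=2.0, priority=2),
    Task("shelf_edge", duration=1.0, travel=0.5, priority=1),
]
print(plan_mission(tasks, time_budget=5.0))
# -> ['vent_field', 'shelf_edge']  (seamount would overrun the budget)
```

Shrinking the budget changes the outcome: with only two hours, even the top-priority vent survey no longer fits, and the planner falls back to the cheap shelf-edge task alone.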

In March, the team, in collaboration with the Schmidt Ocean Institute, tested the system off the western coast of Australia using an autonomous underwater glider. Over multiple deployments, the glider operated safely among a number of other autonomous vehicles while receiving only high-level commands. When another vehicle took longer than expected to explore a particular area, the glider reshuffled its priorities, choosing to stay in its current location longer in order to avoid potential collisions.

When developing the system, a group led by aero-astro professor Brian Williams took inspiration from the Star Trek franchise and the top-down command center of the starship Enterprise, after which Williams named the system.

Just as a hierarchical crew runs the fictional starship, Williams’s Enterprise system incorporates levels of decision makers. One component of the system acts as a “captain,” deciding where and when to explore. Another component functions as a “navigator,” planning out a route to meet mission goals. The last component works as a “doctor” or “engineer,” diagnosing problems and replanning autonomously.
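One way to picture this layered division of labor is as three cooperating components, each owning one level of decision making. The class names and interfaces below are assumptions made for illustration, not the published Enterprise architecture.

```python
class Captain:
    """Top level: decides which goals to pursue within the time budget."""
    def choose_goals(self, goals, time_budget):
        chosen, used = [], 0.0
        for goal, est_hours in goals:
            if used + est_hours <= time_budget:
                chosen.append(goal)
                used += est_hours
        return chosen

class Navigator:
    """Middle level: turns each chosen goal into a route segment."""
    def route(self, goals):
        return [f"transit to {g}" for g in goals]

class Engineer:
    """Bottom level: monitors vehicle health and triggers replanning."""
    def check(self, sensor_ok):
        return "continue" if sensor_ok else "replan"

captain, navigator, engineer = Captain(), Navigator(), Engineer()
goals = captain.choose_goals(
    [("site A", 2.0), ("site B", 3.0), ("site C", 2.0)], time_budget=5.0
)
print(navigator.route(goals))           # routes only the goals that fit
print(engineer.check(sensor_ok=False))  # -> "replan"
```

The point of the layering is that each component can be revised independently: a fault flagged by the bottom layer sends control back up, and the upper layers replan without a human in the loop.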

Giving robots control of higher-level decision making frees engineers to think about overall strategy, says Williams, who developed a similar system for NASA after it lost contact with the Mars Observer days before the spacecraft was scheduled to begin orbiting Mars in 1993. Such a system could also reduce the number of people needed on research cruises and let robots operate without being in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.

“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams says. “You could send sea vessels that send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”
