When deploying autonomous underwater vehicles (AUVs), engineers spend much of their time writing low-level commands to direct the robot through each step of a mission plan. Now a new programming approach developed at MIT and the Woods Hole Oceanographic Institution gives robots more “cognitive” capabilities, letting humans specify high-level goals while the robot figures out how to achieve them.
For example, an engineer may give a robot a list of locations to explore, along with time limits and physical constraints, such as staying a certain distance above the seafloor. Using the MIT system, the robot plans out the mission itself, choosing which locations to explore, in what order, within the given time frame. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task.
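To make the idea concrete, here is a minimal illustrative sketch of what such a high-level goal specification and planner might look like. It is not the actual MIT/WHOI system; the Goal fields, the greedy selection strategy, and all names are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the actual Enterprise code): a high-level
# goal list with a time budget, and a simple greedy planner that picks which
# survey sites to visit and drops goals that no longer fit.
from dataclasses import dataclass


@dataclass
class Goal:
    name: str
    est_hours: float   # estimated time to reach and survey the site
    priority: int      # higher means more important to the mission


def plan_mission(goals: list[Goal], time_budget_hours: float) -> list[Goal]:
    """Select goals in priority order until the time budget is used up."""
    plan, remaining = [], time_budget_hours
    for goal in sorted(goals, key=lambda g: g.priority, reverse=True):
        if goal.est_hours <= remaining:
            plan.append(goal)
            remaining -= goal.est_hours
        # A goal that no longer fits is simply dropped, mirroring how the
        # vehicle abandons tasks it cannot complete in time.
    return plan


if __name__ == "__main__":
    sites = [
        Goal("hydrothermal vent A", est_hours=3.0, priority=3),
        Goal("seafloor transect B", est_hours=5.0, priority=2),
        Goal("sensor mooring C", est_hours=2.5, priority=1),
    ]
    for g in plan_mission(sites, time_budget_hours=6.0):
        print("visit:", g.name)
```

Run on the hypothetical sites above with a six-hour budget, the sketch keeps the vent and the mooring but drops the transect, the kind of trade-off the article describes the vehicle making on its own.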
In March, the team, in collaboration with the Schmidt Ocean Institute, tested the system off the western coast of Australia, using an autonomous underwater glider. Over multiple deployments, the glider operated safely among a number of other autonomous vehicles while receiving high-level commands. When another vehicle took longer than expected to explore a particular area, the glider reshuffled its priorities, choosing to stay in its current location longer in order to avoid potential collisions.
When developing the system, a group led by aero-astro professor Brian Williams took inspiration from the Star Trek franchise and the top-down command center of the starship Enterprise, after which Williams named the system.
Just as a hierarchical crew runs the fictional starship, Williams’s Enterprise system incorporates levels of decision makers. One component of the system acts as a “captain,” deciding where and when to explore. Another component functions as a “navigator,” planning out a route to meet mission goals. The last component works as a “doctor” or “engineer,” diagnosing problems and replanning autonomously.
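As a rough sketch of that layered structure, the following hypothetical classes mirror the division of labor described above. The class names, methods, and replanning logic are assumptions made for illustration, not the system's real interfaces.

```python
# Hypothetical sketch of the layered decision-making described above:
# a Captain picks goals, a Navigator orders them into a route, and an
# Engineer monitors health and triggers replanning. All names are invented.


class Captain:
    """Decides where and when to explore."""

    def choose_goals(self, candidate_sites, time_budget_hours):
        total, chosen = 0.0, []
        for site, hours in candidate_sites:
            if total + hours <= time_budget_hours:
                chosen.append(site)
                total += hours
        return chosen


class Navigator:
    """Plans a route that meets the captain's goals."""

    def plan_route(self, goals):
        # A real planner would optimize travel time; here we keep goal order.
        return list(goals)


class Engineer:
    """Diagnoses problems and asks for a new plan when something goes wrong."""

    def healthy(self, status):
        return not status.get("fault")


def run_mission(candidate_sites, time_budget_hours, status):
    captain, navigator, engineer = Captain(), Navigator(), Engineer()
    goals = captain.choose_goals(candidate_sites, time_budget_hours)
    if not engineer.healthy(status):
        # Replan with a reduced budget, e.g. to cut the mission short.
        goals = captain.choose_goals(candidate_sites, time_budget_hours / 2)
    return navigator.plan_route(goals)
```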
Giving robots control of higher-level decision making frees engineers to think about overall strategy, says Williams, who developed a similar system for NASA after the agency lost contact with the Mars Observer spacecraft in 1993, days before it was scheduled to begin orbiting Mars. Such a system could also reduce the number of people needed on research cruises and let robots operate without continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.
“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams says. “You could send sea vessels that send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”