When deploying autonomous underwater vehicles (AUVs), an engineer spends much of the time writing low-level commands to direct the robot through a mission plan. Now a new programming approach developed at MIT and the Woods Hole Oceanographic Institution gives robots more “cognitive” capabilities, letting humans specify high-level goals while the robot figures out how to achieve them.
For example, an engineer may give a robot a list of locations to explore, along with time constraints and physical constraints, such as staying a certain distance above the seafloor. Using the MIT system, the robot plans out a mission, choosing which locations to explore, in what order, within the given time frame. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task.
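The kind of high-level planning described above can be sketched as a simple time-budgeted task selector. This is an illustrative toy only, under assumed names and a greedy strategy; it is not the actual MIT/WHOI planner, which reasons about routes, constraints, and replanning far more generally.

```python
def plan_mission(sites, time_budget):
    """Greedily select survey sites in priority order, skipping
    (i.e., dropping) any site whose cost would exceed the remaining
    time budget. Each site is a (name, hours, priority) tuple."""
    plan, remaining = [], time_budget
    # Consider the highest-priority sites first.
    for name, cost, priority in sorted(sites, key=lambda s: -s[2]):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
        # Otherwise the task is dropped, as the article describes.
    return plan, remaining

sites = [
    ("ridge", 4.0, 3),       # (name, hours required, priority)
    ("vent_field", 3.0, 5),
    ("canyon", 6.0, 2),
]
plan, hours_left = plan_mission(sites, time_budget=8.0)
# plan -> ["vent_field", "ridge"]; "canyon" is dropped for lack of time
```

Here the “unforeseen event” of the article would show up as a shrunken time budget, causing lower-priority sites to be dropped on replanning.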
In March, the team, in collaboration with Schmidt Ocean Institute, tested the system off the western coast of Australia, using an autonomous underwater glider. Over multiple deployments, it operated safely among a number of other autonomous vehicles while receiving higher-level commands. If another vehicle took longer than expected to explore a particular area, the glider reshuffled its priorities, choosing to stay in its current location longer in order to avoid potential collisions.
When developing the system, a group led by aero-astro professor Brian Williams took inspiration from the Star Trek franchise and the top-down command center of the starship Enterprise, after which Williams named the system.
Just as a hierarchical crew runs the fictional starship, Williams’s Enterprise system incorporates levels of decision makers. One component of the system acts as a “captain,” deciding where and when to explore. Another component functions as a “navigator,” planning out a route to meet mission goals. The last component works as a “doctor” or “engineer,” diagnosing problems and replanning autonomously.
Giving robots control of higher-level decision making frees engineers to think about overall strategy, says Williams, who developed a similar system for NASA after it lost contact with the Mars Observer days before the spacecraft was scheduled to begin orbiting Mars in 1993. Such a system could also reduce the number of people needed on research cruises and let robots operate without being in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.
“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams says. “You could send sea vessels that send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”