When deploying autonomous underwater vehicles (AUVs), an engineer spends considerable time writing low-level commands to direct the robot through a mission plan. A new programming approach developed at MIT and the Woods Hole Oceanographic Institution gives robots more “cognitive” capabilities, letting humans specify high-level goals while the robot figures out how to achieve them.
For example, an engineer may give a robot a list of locations to explore, along with time constraints and physical directions, such as staying a certain distance above the seafloor. Using the MIT system, the robot plans out a mission, choosing which locations to explore, in what order, within a given time frame. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task.
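The behavior described above — selecting which locations to explore within a time budget and dropping tasks that no longer fit — can be sketched as a simple priority-ordered scheduler. This is an illustrative toy, not the actual MIT system; the task names, durations, and the greedy strategy are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float  # hours needed to explore this location
    priority: int    # higher number = more important

def plan_mission(tasks, time_budget):
    """Greedily schedule the highest-priority tasks that fit in the
    remaining time budget; tasks that would exceed it are dropped."""
    schedule, dropped, remaining = [], [], time_budget
    for task in sorted(tasks, key=lambda t: -t.priority):
        if task.duration <= remaining:
            schedule.append(task)
            remaining -= task.duration
        else:
            dropped.append(task)
    return schedule, dropped

# Hypothetical survey sites for a 6-hour dive window
tasks = [
    Task("vent field", 3.0, priority=3),
    Task("canyon wall", 2.5, priority=2),
    Task("seamount", 4.0, priority=1),
]
schedule, dropped = plan_mission(tasks, time_budget=6.0)
# "vent field" (3.0 h) and "canyon wall" (2.5 h) fit in 6 h;
# "seamount" no longer fits and is dropped, as the article describes.
```

A real planner would also reason about travel time between sites and physical constraints such as minimum altitude above the seafloor, but the core idea — commit to the goals that fit, shed the rest — is the same.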
In March, the team, in collaboration with Schmidt Ocean Institute, tested the system off the western coast of Australia, using an autonomous underwater glider. Over multiple deployments, it operated safely among a number of other autonomous vehicles while receiving higher-level commands. If another vehicle took longer than expected to explore a particular area, the glider reshuffled its priorities, choosing to stay in its current location longer in order to avoid potential collisions.
When developing the system, a group led by aero-astro professor Brian Williams took inspiration from the Star Trek franchise and the top-down command center of the starship Enterprise, after which Williams named the system.
Just as a hierarchical crew runs the fictional starship, Williams’s Enterprise system incorporates levels of decision makers. One component of the system acts as a “captain,” deciding where and when to explore. Another component functions as a “navigator,” planning out a route to meet mission goals. The last component works as a “doctor” or “engineer,” diagnosing problems and replanning autonomously.
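The three-layer division of labor described above can be made concrete with a toy pipeline in which each layer hands its output to the next. All function names and logic here are illustrative stand-ins, not the actual Enterprise system.

```python
def captain(goals, time_budget):
    """Decide where and when to explore: keep goals until the budget runs out."""
    chosen, remaining = [], time_budget
    for site, hours in goals:
        if hours <= remaining:
            chosen.append(site)
            remaining -= hours
    return chosen

def navigator(chosen_sites, start):
    """Plan a route to meet the mission goals (here: visit in given order)."""
    return [start] + chosen_sites

def engineer(route, blocked):
    """Diagnose problems and replan: drop any waypoint reported unreachable."""
    return [wp for wp in route if wp not in blocked]

# Hypothetical goals: (site, hours to survey), with a 6-hour budget
goals = [("site_A", 2.0), ("site_B", 3.0), ("site_C", 4.0)]
route = navigator(captain(goals, time_budget=6.0), start="dock")
route = engineer(route, blocked={"site_B"})  # replan around a blocked site
```

The point of the layering is that each level can revise its own decisions without disturbing the others — the "engineer" can reroute around a problem without the "captain" having to re-decide the mission's goals.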
Giving robots control of higher-level decision making frees engineers to think about overall strategy, says Williams, who developed a similar system for NASA after it lost contact with the Mars Observer days before the spacecraft was scheduled to begin orbiting Mars in 1993. Such a system could also reduce the number of people needed on research cruises and let robots operate without being in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.
“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams says. “You could send sea vessels that send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”