
Divers Command Robot with Waterproof Tablet

Water is an unforgiving medium for communication, but a new approach using visual tags shows potential.

A new, surprisingly simple communication method between divers and underwater robots is a significant addition to the small arsenal of tools available to operators of robotic submersibles, or ROVs. The method could enable new kinds of on-site diver/ROV collaboration in difficult situations, such as military operations, environmental remediation, and aquaculture.

Underwater communication between divers and submersible robots is an almost intractable problem. Unlike with terrestrial robots, radio waves aren't an option: they don't travel far under water and are easily distorted. Sonar (sound) is low bandwidth and requires enormous amounts of power (enough, in some cases, to damage marine life), and communication with light is foiled by "aquatic snow," the particulate matter floating in the water column.

“The problem is really quite bad, to the point that many militaries don’t operate ROVs and divers in the same environment, because the communication and coordination is too difficult,” says Michael Jenkin, senior author on a paper forthcoming from York University called Swimming with Robots: Human Robot Communication at Depth.

Jenkin and lead author Bart Verzijlenberg's solution to the problem is a waterproof tablet that can display the same sort of two-dimensional bar codes, or tags, already used on products, advertisements, and stickers designed to be read by smartphones.

Flashing these tags at the AQUA robot's underwater camera allows communication that is robust and high-bandwidth relative to other untethered underwater communication modalities. In one example, a 6x6 tag can encode 36 bits but redundantly displays only 10 bits, which correspond to a command stored in the robot's memory.
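To make the idea concrete, here is a minimal sketch of how such a tag-to-command scheme might work. The layout below (three repeated copies of a 10-bit command ID inside the 36-cell grid, reconciled by a majority vote) and the command table are illustrative assumptions; the paper's actual fiducial format and error handling aren't described in this article.

```python
# Illustrative sketch only: the real AQUA tag format and redundancy scheme
# are not described in the article, so the layout below is an assumption
# made for demonstration.

# Hypothetical command table stored in the robot's memory.
COMMANDS = {
    0b0000000001: "hover in place",
    0b0000000010: "follow the diver",
    0b0000000011: "surface and report",
}

def decode_tag(cells):
    """Decode a 6x6 grid of black/white cells (1 = black) into a command.

    The 36 cells are read row by row; the first 30 bits are treated as
    three redundant copies of a 10-bit command ID, and the copies are
    reconciled with a bitwise majority vote.
    """
    bits = [cell for row in cells for cell in row]
    assert len(bits) == 36, "expected a 6x6 tag"

    copies = [bits[0:10], bits[10:20], bits[20:30]]  # last 6 bits unused here
    voted = [1 if sum(column) >= 2 else 0 for column in zip(*copies)]

    command_id = int("".join(str(b) for b in voted), 2)
    return COMMANDS.get(command_id, "unknown command")

# Example: a tag whose three copies all encode ID 0b0000000010.
tag = [[0] * 6 for _ in range(6)]
for start in (0, 10, 20):
    row, col = divmod(start + 8, 6)  # set the second-lowest bit of each copy
    tag[row][col] = 1

print(decode_tag(tag))  # -> "follow the diver"
```

Because only 10 of the 36 bits carry the payload, the tag can tolerate a few misread cells, which matters when the camera is looking through murky water.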

A video of the robot operating in tethered mode shows its ability to react to these visual tags in real time and to transmit video back to the underwater tablet. In autonomous mode, a live video feed would be impossible, but the robot could carry out tasks on its own and report back to a diver directly, rather than traveling all the way to the surface to communicate with its handlers.

To date, the system has been tested in the open ocean and swimming pools, but future plans include penetration of wrecked ships.

Follow Christopher Mims on Twitter, or contact him via email.
