Multi-Touch Control of Robot Swarms

The new Dream controller for Microsoft Surface could help speed up search-and-rescue operations.

The Dream controller (short for Dynamically-Resizing, Ergonomic and Multi-touch) is new software for Microsoft’s Surface touch screen that lets one or more people take control of a single robot, or a swarm of them. It offers new ways to manage search-and-rescue robots through a touch-screen interface: several virtual robot controllers are automatically integrated with a view of a virtual map.

When disaster strikes, search-and-rescue teams must quickly gather and assimilate the data needed to find survivors. A team of robots can help scout for people trapped in rubble or create new maps of the landscape. But first responders need ways to control those robots and to process the incoming information quickly.

Robots are usually controlled with a physical device, such as a joystick or a game-console-style controller. Mark Micire, a researcher at the University of Massachusetts Lowell and a member of the Massachusetts FEMA team, who deployed search-and-rescue robots at the World Trade Center after 9/11, built the Dream system to help first responders. Because they no longer need a separate device to control each of several robots while referring to a physical map, he says, they can maneuver more quickly.

“Right now, the state of practice is to use paper maps, with everyone gathered around,” says Holly Yanco, professor and head of the Robotics Lab at UMass Lowell. “We’ve designed the multi-touch application to replace these maps with interactive ones.” This lets live data (satellite imagery, sensor readings, and video or photography from people, vehicles, or robots) be put to use almost instantaneously.
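
To make that idea concrete, the sketch below shows one way an interactive map could merge live data layers. It is an illustrative sketch only; the layer model, names, and interfaces are assumptions, not details of the UMass Lowell software.

```typescript
// Illustrative sketch: one way to model an interactive map that merges
// live data layers. Names and shapes here are assumptions, not details
// of the UMass Lowell system.

type LatLon = { lat: number; lon: number };
type Viewport = { center: LatLon; zoom: number };

interface MapLayer {
  name: string;       // e.g. "satellite", "robot-video", "sensor-readings"
  updatedAt: number;  // ms since epoch, so the UI can flag stale data
  render(view: Viewport): void;
}

class InteractiveMap {
  private layers: MapLayer[] = [];

  addLayer(layer: MapLayer): void {
    this.layers.push(layer);
  }

  // Redraw all layers for the current viewport, oldest data first so the
  // freshest imagery ends up on top.
  draw(view: Viewport): void {
    [...this.layers]
      .sort((a, b) => a.updatedAt - b.updatedAt)
      .forEach((layer) => layer.render(view));
  }
}
```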

“With our design, a person can select the robot, then place his or her hands down to form the Dream controller, directly drive the robot, and see the robot’s-eye video,” says Yanco. “Once done, the controller disappears.”
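
The interaction Yanco describes maps naturally onto standard touch-event handling. The sketch below illustrates that lifecycle using the web Pointer Events API; the actual system ran on Microsoft Surface, and spawnController, driveRobot, and dismissController are hypothetical stand-ins for the UI and robot plumbing.

```typescript
// Minimal sketch of the touch-down/drive/touch-up lifecycle, written
// against the standard web Pointer Events API. The helper functions are
// hypothetical stubs, not the real system's API.

type Contact = { x: number; y: number };
const activeTouches = new Map<number, Contact>();

function spawnController(contacts: Contact[]): void { /* build the controller UI under the hand */ }
function driveRobot(contacts: Contact[]): void { /* map finger motion to robot velocity */ }
function dismissController(): void { /* remove the controller from the screen */ }

const surface = document.getElementById("surface")!;

surface.addEventListener("pointerdown", (e: PointerEvent) => {
  activeTouches.set(e.pointerId, { x: e.clientX, y: e.clientY });
  // Once a full hand (five contacts) is down, form a controller beneath it.
  if (activeTouches.size === 5) spawnController([...activeTouches.values()]);
});

surface.addEventListener("pointermove", (e: PointerEvent) => {
  if (!activeTouches.has(e.pointerId)) return;
  activeTouches.set(e.pointerId, { x: e.clientX, y: e.clientY });
  driveRobot([...activeTouches.values()]);
});

surface.addEventListener("pointerup", (e: PointerEvent) => {
  activeTouches.delete(e.pointerId);
  // Lifting the hand makes the controller disappear, as Yanco describes.
  if (activeTouches.size === 0) dismissController();
});
```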

Each controller sizes itself to a person’s hand span and finger size, based on the contact points made with the screen, so very large or very small hands can control it just as easily. “We aren’t aware of any other multi-touch controllers that conform to the placement and size of a person’s hand,” says Yanco. The Dream controller’s hand- and finger-registration algorithm, which has a patent pending, is faster and performs better than previous approaches, according to Yanco.
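
The patent-pending registration algorithm itself is not public, but the basic idea (fit a controller to wherever, and however, a hand lands) can be sketched with a simple heuristic: anchor the controller at the centroid of the five contacts, scale it to the hand’s span, and pick out the thumb as the contact farthest from the rest. The following is a simplified illustration, not Micire’s algorithm.

```typescript
// Simplified illustration of hand registration from five screen contacts.
// This heuristic is an assumption, not the patented Dream algorithm.

type Point = { x: number; y: number };

function dist(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function fitController(contacts: Point[]): { center: Point; radius: number; thumb: Point } {
  if (contacts.length !== 5) throw new Error("expected five contact points");

  // Centroid of all five contacts anchors the controller under the palm.
  const center = {
    x: contacts.reduce((s, p) => s + p.x, 0) / 5,
    y: contacts.reduce((s, p) => s + p.y, 0) / 5,
  };

  // Hand span (half the max pairwise distance) sets the controller's size,
  // so large and small hands both get a comfortable fit.
  let radius = 0;
  for (const a of contacts)
    for (const b of contacts) radius = Math.max(radius, dist(a, b) / 2);

  // Crude thumb heuristic: the contact farthest from the mean of the others.
  let thumb = contacts[0];
  let best = -Infinity;
  for (const c of contacts) {
    const others = contacts.filter((p) => p !== c);
    const mean = {
      x: others.reduce((s, p) => s + p.x, 0) / others.length,
      y: others.reduce((s, p) => s + p.y, 0) / others.length,
    };
    const d = dist(c, mean);
    if (d > best) {
      best = d;
      thumb = c;
    }
  }

  return { center, radius, thumb };
}
```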

Here’s a video of the controller being used to control a real robot (an ATRV-Jr):

And this video shows it controlling a swarm of virtual robots:
