
Robots to the Rescue

Is it dangerous, physically demanding or just plain icky? For tough jobs, tomorrow’s workforce is cross-training on the playing fields of today.

Rush into that burning building, clean up that dangerous chemical spill or locate that barricaded killer? Maybe we should let Robbie do it.

Stationary assembly-line robots have excelled at meticulous and/or numbingly repetitive jobs in American factories for decades. Can they step (or roll) out into the real world and take on far more difficult tasks in uncharted terrain?

Robot undergrads and their designers from colleges around the world got a chance to test their mettle at the International Joint Conference on Artificial Intelligence (IJCAI), a biennial gathering of 2,600 AI researchers that convened recently in Seattle’s cavernous Washington State Convention and Trade Center.

While it may appear to be all fun and games, the goal was still serious: to advance the state of robotics. Commercially, mobile robotics is still at the starting gate. But ActivMedia Research of Peterborough, NH, predicts that mobile robot sales will soar 2,500 percent, from $665 million in 2000 to $17 billion by 2005.

High-priced robot guards, bomb disposal specialists and hospital workers are already on some job sites. And soon, mobile robots could take on everyday roles as vacuum cleaners, lawn mowers or even cabbies. The number of mobile robots in the workforce is expected to grow to 865,000 by 2005, says ActivMedia analyst Harry Wolhandler.

Serious Fun

Nowhere was robotic potential more evident than in the IJCAI Robot Rescue competition, which simulates the rescue of injured people trapped inside fallen buildings. Different kinds of robots negotiated three test courses of increasing difficulty designed by the National Institute of Standards and Technology (NIST). The contest was jointly sponsored by the American Association for Artificial Intelligence (AAAI) and RoboCup, an annual robotics soccer competition.

Each team’s robot or robots had 25 minutes to negotiate the course, locate dummies simulating injured persons and return to report the location of each victim. The entries ranged from completely autonomous robots to completely remote-controlled, or “tele-operated,” ones. More points were awarded for autonomy, that is, for relying on a robot’s AI programming to respond to changing situations.

“We chose the robot search-and-rescue model because we’re trying to focus all of those game-based AI efforts on real-world problems,” says Adam Jacoff, a mechanical engineer at NIST’s intelligent systems division in Gaithersburg, MD.

In the easiest course, robots were required to negotiate a simple two-dimensional maze-like area and locate victims with simulated life signs. One dummy might have a finger that moves; another would moan but have no motion. The robots needed to use these cues to determine whether each victim was alive or dead.
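As a rough illustration, and not any team’s actual code, combining such cues into a triage decision might look like the sketch below; the cue names and the simple either/or rule are assumptions.

```python
# Hypothetical sketch: turning simulated life-sign cues into a triage call.
# The cue names and the decision rule are illustrative assumptions, not contest code.
from dataclasses import dataclass

@dataclass
class VictimCues:
    motion_detected: bool   # e.g. a finger that moves
    sound_detected: bool    # e.g. a moan picked up by a microphone

def triage(cues: VictimCues) -> str:
    """Classify a dummy as 'alive' or 'presumed dead' from its observed cues."""
    if cues.motion_detected or cues.sound_detected:
        return "alive"
    return "presumed dead"

print(triage(VictimCues(motion_detected=False, sound_detected=True)))  # -> alive
```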

Then more and more obstacles and hazards were added to the courses, including walls that could collapse, debris on the floor that limited mobility, and different levels reachable by ramps and stairs, as well as victims located on upper levels. Obviously, that was particularly challenging for robots that use wheels for mobility.

While robots can’t yet be used to retrieve injured people or perform first aid, they can map the locations of victims and hazards to reduce risks for human rescuers. Some robots upload the information they gather to the rescuer’s computer system when they return; tele-operated robots equipped with a color video camera allow the operator to see what the robot sees in real time.
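One plausible shape for such a report, sketched here with assumed field names (the actual formats were team-specific), is a simple list of victim and hazard entries with coordinates that the rescuer’s computer can read back:

```python
# Hypothetical sketch of a victim/hazard report a robot might upload on return.
# Field names and the JSON layout are assumptions, not a competition standard.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Finding:
    kind: str          # "victim" or "hazard"
    x_m: float         # position in the course frame, meters
    y_m: float
    note: str = ""     # e.g. "moaning, no motion" or "collapsed wall"

@dataclass
class RescueReport:
    robot_id: str
    findings: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

report = RescueReport(robot_id="rescue-bot-1")
report.findings.append(Finding("victim", 3.2, 1.5, note="moaning, no motion"))
report.findings.append(Finding("hazard", 4.0, 2.1, note="debris blocking corridor"))
print(report.to_json())   # handed to the human rescuer's computer on return
```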

Of the six teams that competed (up from three last year), a team from Sharif University in Tehran, Iran, received a technical achievement award for its robot’s mobility, achieved with tank treads that enabled the tracked vehicle to negotiate the more difficult test courses.

A team from Swarthmore College in Swarthmore, PA, also received a technical achievement award for its innovative use of AI. By using two autonomous robots, the Swarthmore team was able to sweep more area and maximize the resources of the human rescuer. The small cylindrical robots were also able to take three-dimensional images that could be later viewed by the human operator using 3-D viewers. But because the robots used wheels, it was harder for them to negotiate the more difficult courses.

They Also Rolled/Ran

Teams that chose not to compete still demonstrated robots that may land real-world jobs sooner than most, reports Tucker Balch, associate chair of RoboCup 2001 and a computer science professor at Georgia Tech. A University of Minnesota team demonstrated tiny robots about the size of a flashlight, with wheels on both ends, that could be used to get into small, tight spaces to seek out victims trapped under rubble.

Hiroaki Kitano, project director for the Kitano Symbiotic Systems Project at Japan Science and Technology Corp. in Tokyo, as well as founder of the RoboCup competition, demonstrated a humanoid robot, dubbed PINO, that walks more like a human than with the somewhat hunched gait of existing bipedal robots, and thus requires less energy.

PINO runs on a Pentium III computer tethered to its bipedal body. It also includes a vision system for recognizing objects, plus sensors that let the robot gauge its posture, balance, momentum and foot forces. Each foot has eight individual sensors, and the robot uses a total of 26 joint-angle potentiometers. But the key to PINO, says Kitano, is a genetic algorithm that enables the robot to walk by itself.
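As a rough sketch of that idea, and not PINO’s actual controller, a genetic algorithm for gait tuning keeps a population of candidate parameter sets, scores each by how well the robot walks, and breeds the best scorers. The parameter count, fitness function and settings below are all illustrative:

```python
# Toy genetic algorithm for gait parameters, in the spirit of Kitano's description.
# The parameter set, fitness function, and GA settings here are all illustrative.
import random

N_PARAMS = 26          # one value per joint-angle target, mirroring PINO's 26 joints
POP_SIZE = 30
GENERATIONS = 50

def fitness(gait):
    """Stand-in score: a real system would measure distance walked before falling."""
    return -sum((g - 0.5) ** 2 for g in gait)   # toy objective with a known optimum

def mutate(gait, rate=0.1):
    return [g + random.gauss(0, 0.05) if random.random() < rate else g for g in gait]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                 # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```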

One of the more light-hearted examples of robotics was the hors d’oeuvres competition featuring robot-served food at a mock reception. The teams could use any technology they wanted, but each robot had to offer food to humans only, not inanimate objects.

NIST has its own robotics projects as well. “We have a Humvee that can drive itself autonomously,” says Jacoff. It’s earmarked for rescue operations, but also for situations where a human is too far away from the robot to communicate in real time, such as in space.

If humans aren’t the first to step out into the dust of Mars, it’ll probably be because we let Robbie do it.
