
The Robots Running This Way

Some of the machines acquired recently by Google represent a giant leap forward for robot-kind.

In the pit lane of the Homestead-Miami speedway in Florida, inside a track on which race cars sometimes travel at over 300 kilometers an hour, a small crowd is watching something considerably slower but arguably far more impressive. On a sunny Saturday morning just before Christmas, a robot that roughly resembles a large person is contemplating a makeshift door on the tarmac ahead. It surveys the door using a laser scanner and a pair of cameras in its head; then, after a lengthy pause, the robot extends a gleaming aluminum arm, pushes open the door, and slowly steps through.

The robot, called Atlas and made by Boston Dynamics, is competing in the DARPA Robotics Challenge, organized by the U.S. Defense Advanced Research Projects Agency. Over the weekend, robots of varying shape and design, all controlled remotely, attempt challenges meant to test the limits of artificial sensing, manipulation, and agility. Each task is inspired by work that could help stem a leak at a stricken nuclear power plant. The jobs are seemingly simple, but not for robots. In one, the machines must get across a pile of rubble; in another they have to climb a tall ladder.

Many of the robots struggle to complete the tasks without malfunctioning, freezing up, or toppling over. Of all the challenges facing them, one of the most difficult, and potentially the most important to master, is simply walking over uneven, unsteady, or just cluttered ground. But the Atlas robots (several academic groups have entered versions of the Boston Dynamics machine) walk across such terrain with impressive confidence.

A couple of times each day, the crowd gets to see two other legged robots made by Boston Dynamics. In one demo, a four-legged machine about the size of a horse trots along the track carrying several large packs; it cleverly shuffles its feet to stay upright when momentarily unbalanced by a hefty kick from its operator. In another, a smaller, more agile four-legged machine revs up a loud diesel engine, then bounds maniacally along the racetrack like a big cat, quickly reaching about 26 kilometers per hour.

The crowd, filled with robotics researchers from around the world and curious members of the public, gasps and applauds. But the walking and running technology found in the machines developed by Boston Dynamics is more than just dazzling. If it can be improved, then these robots, and others like them, might stride out of research laboratories and populate the world with smart mobile machines. That helps explain why a few days before the DARPA Challenge, Boston Dynamics was acquired by Google.

Part 1
Learning to Hop

A few months before the DARPA contest, I visited Boston Dynamics, which occupies an ordinary-looking building on the edge of a quiet industrial park in Waltham, Massachusetts, a 20-minute drive from Boston. In the entrance, four-legged robots of various shape and size appear to stand guard. Within the large workshop inside, dozens of engineers tinkered away on all manner of mechanical beast. In one corner, a small four-legged machine with a long neck and a gripper instead of a head was using the appendage to hurl cinder blocks across the floor.

All these machines have their origins in groundbreaking work done by Marc Raibert, the founder and chief technology officer of Boston Dynamics. On the wall of Raibert’s office, next to a large poster showing Atlas in great technical detail, is a small poster identifying various dinosaurs. He recalls becoming interested in animal locomotion while studying for a PhD in the brain and cognitive sciences department at MIT in the late 1970s, when two prominent physiologists came to talk about research on cat locomotion. Fascinated by how a brain can produce such effortless agility, Raibert hatched a plan to start building machines to explore the phenomenon when he landed a job as an assistant professor at Carnegie Mellon University in 1980.

Other academics had built walking machines. Some had many legs, to ensure that lifting one of them to walk forward would not unbalance them. Others moved extremely carefully and deliberately to maintain a precarious balance. The machines were clumsy, slow, and altogether a poor imitation of most biological locomotion. In many cases, even the slightest slip or push would cause them to fall over.

In a remarkable insight, Raibert decided that his first walking robot would not be designed to avoid the instability that motion can introduce; it would embrace it. Instead of six legs or even four, he gave it just one.

The robot would have to bounce on its single leg, assessing its own movement and orientation with each leap, and quickly adjust the position of its leg and body as well as the amount of energy its leg would expend with the next hop. The calculations were surprisingly simple.
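
Raibert's published control scheme split that job into three nearly independent feedback rules: add leg thrust during stance to regulate hop height, choose where to place the foot at touchdown to regulate forward speed, and apply hip torque while the foot is planted to keep the body level. The sketch below is a minimal, illustrative rendering of that idea in Python; the state fields, gains, and function names are invented for the example and are not taken from the original machine.

```python
from dataclasses import dataclass

@dataclass
class HopperState:
    x_dot: float        # forward velocity of the body (m/s)
    body_pitch: float   # body pitch angle (rad)
    pitch_rate: float   # body pitch rate (rad/s)
    hop_height: float   # apex height reached on the last hop (m)
    stance_time: float  # duration of the last stance phase (s)

def flight_controller(state: HopperState, x_dot_des: float,
                      k_xdot: float = 0.05) -> float:
    """While airborne, pick where to place the foot at touchdown.
    The 'neutral point' (half the distance the hip travels during stance)
    holds the current speed; offsetting the foot in proportion to the
    speed error speeds the robot up or slows it down."""
    neutral_point = state.x_dot * state.stance_time / 2.0
    return neutral_point + k_xdot * (state.x_dot - x_dot_des)

def stance_controller(state: HopperState, hop_height_des: float,
                      k_h: float = 20.0, k_p: float = 80.0, k_d: float = 8.0):
    """While the foot is planted, add leg thrust to regulate hop height
    and apply hip torque to keep the body level (a simple PD law)."""
    thrust = k_h * (hop_height_des - state.hop_height)
    hip_torque = -k_p * state.body_pitch - k_d * state.pitch_rate
    return thrust, hip_torque

# Example with illustrative numbers:
s = HopperState(x_dot=0.8, body_pitch=0.05, pitch_rate=-0.1,
                hop_height=0.4, stance_time=0.17)
print(flight_controller(s, x_dot_des=1.0))    # foot offset ahead of the hip (m)
print(stance_controller(s, hop_height_des=0.5))
```

Each rule is essentially a one-line feedback law applied at the right moment in the hop cycle, which is why the calculations could be so simple; the hard part was sensing the robot's state and closing the loop quickly enough on real hardware.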

Remarkably, the robot worked perfectly, hopping around like a possessed pogo stick. While the first version was limited in its movement, the next could hop freely around the lab. “I can still remember—I think it was a day in August in 1983,” Raibert recalls. “We were all just standing around grinning. We would push the machine and it would travel across the room until the other guy got it and then he would push it back.”

Raibert knew that a leaping animal becomes unbalanced as it jumps and must constantly adjust, and that it uses gravity to move itself along. The rudimentary hopping robot solved the same problems, and it showed how to build more nimble machines. “It looked to me like the dynamics of [biological movement], where there’s a lot of energy and motion, where there’s tipping all the time—that those were really the characteristics you wanted to get,” he recalls.

Inspired by the success of the approach, Raibert and his students started building other legged machines using what roboticists call dynamic balance—an ability to use movement to maintain balance. The next version trotted along on two front and two back legs. Other robots had far more sophisticated joints, actuators, and control software.

In 1986 Raibert’s “Leg Lab” moved from CMU to MIT, where it developed other robots that could walk, bounce, run, and jump in ways that often seemed oddly recognizable. The machines had names inspired by their biological counterparts: Spring Flamingo and Spring Turkey would strut around the lab like giant birds, while Uniroo hopped along using a tail for balance, like an awkward, one-legged kangaroo.

Raibert founded Boston Dynamics in 1995, initially to sell simulation software developed at his lab. But the company also consulted on commercial robotics projects, including the development of AIBO and QRIO, robotic toys that Sony made in 1999 and 2003, respectively. And a contract with DARPA, in 2003, saw Boston Dynamics start making its own legged machines.

Part 2
Learning to Run

BigDog

In 2003, armed with a DARPA contract to create a prototype vehicle capable of following troops across ground inaccessible to wheeled or tracked vehicles, Boston Dynamics began developing BigDog, a four-legged machine roughly the size of a large Bernese mountain dog. The robot had to be able to navigate messy, unpredictable real-world terrain. This meant it needed to be tough, exceptionally agile, able to carry its own power source, and capable of sensing its own movement—and the environment—in greater detail than any of the walking machines built before.

“Most of the laboratory stuff we’d done up until BigDog was in a pretty benign lab environment,” Raibert says. “It was clean, it was dry, and it was flat.”

The resulting machine was powered by a go-kart engine, and it used 69 sensors to monitor the movement of its legs, the forces exerted on those limbs, and factors including temperature and hydraulic pressure. Using dynamic balance, it could walk across sand, snow, and even ice. Most spectacularly, it could stay on its feet when given a strong kick. BigDog can be driven remotely, but its balancing behavior, like that of other Boston Dynamics robots, is automatically controlled by an onboard computer.

LS3

With additional military funding, including some money from the Marines, Boston Dynamics started building a larger, more powerful version of BigDog in 2009. Dubbed Alpha Dog, but officially called the Legged Squad Support System, or LS3, the robot is the size of a horse and can carry 180 kilograms, or four Marines’ fully loaded backpacks, for up to 20 miles a day over rough terrain.

Like BigDog, LS3 uses a laser ranging instrument, or lidar, and stereo video cameras in its head to identify obstacles, map its surroundings, and follow a soldier walking up to 45 meters ahead, identified by a reflective patch. Last summer, the Marines began testing LS3 at a desert base in California and in the woods at Fort Devens in Massachusetts. These tests have involved simulated combat missions with LS3 as a pack mule.

WildCat

DARPA also provided funding for a more mobile, agile, and faster four-legged robot. The first version, Cheetah, can run at 47 kilometers per hour on a treadmill while attached to a stabilizing bar. Boston Dynamics developed a larger, untethered version, called WildCat, in 2013. Like Cheetah, WildCat flexes its body to extend its stride and increase its speed. It can run at 26 kilometers per hour under remote control. Boston Dynamics has posted video online of the robot bounding and galloping around its parking lot.

Part 3
Learning to Walk

In 1989, one of Raibert’s graduate students, Rob Playter, who had been a champion gymnast at Ohio State, helped him build a free-roaming, two-legged robot that could perform somersaults and other acrobatic feats on a treadmill or while bounding around the lab. “The somersault was showing off,” Raibert admits. But it demonstrated a level of control that promised to help robots navigate much more difficult terrain. It also hinted at how machines might one day move through environments designed for humans. Wheels are a fine way to move when the ground ahead is flat and clear, but a wheel can’t easily climb stairs or get past an overturned chair. If robots are ever to be used widely in our homes, it is likely they will need to walk.

“Because we arrange our houses to suit human beings, it’s very important that the robots have the same competencies of locomotion and manipulation as human beings do,” says Gill Pratt, the DARPA program manager in charge of the robotics challenge. “Legs can provide a tremendous advantage over wheels and tracks; a leg doesn’t need a continuous path of support; a leg can step over things, which is an extraordinary thing to do.”

The specific inspiration for the DARPA Robotics Challenge came in dramatic circumstances, when an earthquake struck off the coast of Japan in March 2011. Attempts at cleaning up the damaged nuclear reactor at Fukushima highlighted the limitations of the best existing robots and showed the need for machines that can better navigate the human world. DARPA devised its challenge to inspire robots that could help should such a situation occur again. The robots must be able not only to work in environments designed for humans but also to navigate those sites after they are severely damaged.

Atlas performed well in Miami, but it is a long way from perfect. For one thing, the power needed to drive its hydraulic systems limits its usefulness. The robots deployed in the contest each required external generators to power their hydraulics; the generators are too large to carry, relatively inefficient, and loud. Even though future versions of Atlas are meant to carry their own power source, this will still be a rudimentary solution until researchers can figure out how to make the machines far more energy efficient.

Perception is another big challenge. Atlas uses dynamic balance, and it can scan its surroundings for obstacles, but the way it uses this information to navigate is still slow and crude. “If you watch someone dancing or climbing or doing parkour, we are incredibly far [away from] a robot that can do that,” Pratt says.

During the DARPA challenge, Atlas operated partly autonomously, in that teams could provide specific instructions and command it to perform a task, but much of the robot’s behavior, including its split-second rebalancing, happened automatically. DARPA’s vision is for rescue robots to operate this way, with humans providing guidance and assistance but the robots functioning autonomously when needed, such as when a communications link fails. But if robots are ever to perform the kinds of tasks that some envision—such as helping the elderly in the home—they will need to have the ability to work with even greater autonomy.

Back in the pit lane, near a garage commandeered by a support team from Boston Dynamics, Raibert says humans and animals have extraordinary mobility, more than any human-made vehicle, so it makes sense to make robots with legs. “Let me just say I think the future of robotics has got to go there,” he says, just before one of his robots starts walking assuredly over a pile of rubble. “You can do stuff now without it, but eventually you’re really going to want that, and that’s what we’re hoping to enable.”

Story by
Will Knight

Front End Development by
Drew Chandler

Principal photography by
Adam DeTour

Additional photography and video courtesy of
Boston Dynamics

Creative Director
Eric Mongeon

Senior Web Producer
Kyanna Sutton

Senior Software Engineer
Molly Frey
