MIT News magazine

Forces of Nature

Sangbae Kim ditched the rules of robotics—and built tomorrow’s first responder.

Suspended over a treadmill in the middle of the Biomimetic Robotics Lab, Sangbae Kim’s best-known creation waits for its next test run. Cheetah III is a bundle of joints, circuits, and electric motors. Like the animal that shares its name, the four-legged bot weighs in at about 90 pounds, and it is quick and competent. Designed to leap over obstacles and make its way through difficult environments at speeds of up to 3 meters per second, or 6.7 miles per hour, Cheetah III “can go almost anywhere a human can go, with minimal supervision,” says Kim.

At the moment, it needs protection from paparazzi, if nothing else. Although they lack features like fur and ears, Cheetah bots maintain a mammalian charisma. When one goes out—to trot down Massachusetts Avenue or leap around an MIT soccer field—it tends to draw a crowd. In Kim’s Ice Bucket Challenge video from 2014, an earlier Cheetah model steals the show by kicking over the bucket. Lab members have papered over the windows of 5-017 so they can get work done.

Kim, an associate professor of mechanical engineering, gets his ideas from nature. “Biology guides us toward what can be possible,” he says. It has helped him make machines that move like insects, lizards, and cats, not to mention rack up hundreds of thousands of YouTube views. But he doesn’t want to stop there. Kim’s latest research takes its cues from a particularly inspiring animal: the human.

Reinventing the leg
In the Biomimetic Robotics Lab in the basement of Building 5, the Cheetah III is surrounded by more familiar machines. Kim’s lab doubles as a small manufacturing facility, with 3-D printers, a laser cutter, a drill press, and a CNC milling machine. While most robotics labs use prefab parts, Kim prefers a DIY approach. “We basically build everything we have,” he says.

This is prudent when you’re making and testing a new mechanical species: over the course of a hard day of exercise, the Cheetah used to go through a whole set of polyurethane paw pads. (It now has rubber ones instead.) But more important, the DIY approach allows the team to start from scratch, unfettered by the assumptions built into standard hardware. “How it moves is really different from most robots, because we actually designed our system by ourselves,” says Kim.

Most of the Cheetah’s peers are optimized for factories. They’re manufacturing robots, made to do the same set of tasks over and over again, whether that’s packing a pallet or screwing in a bolt. “They’re way faster, more precise, and more consistent than humans,” says Kim. “But they don’t actually interact with their environment or objects like we do.”

To demonstrate, Kim performs a classic human action: grabbing his laptop off the table. He’s cartoonishly clumsy about it—his forearms slam into the table’s surface, and his hands hit the sides of the screen—but both human and laptop remain unharmed. Then he puts it back down and imitates how a factory robot would approach the same task. This time, he glides his hands with great concentration, decelerating as he goes. By the time he actually reaches the object, he is barely moving.

“Your walking is a million times more amazing than jet fighters flying.”

“A human can just grab things in a second or less,” he says. But a robot? “It has to be slow, because it cannot collide.” The same stiffness that makes a factory robot consistent prevents it from safely absorbing the energy produced by an impact. Instead, that energy ends up breaking it, or whatever it’s trying to interact with. This limitation is crippling for machines that want to, say, walk: after all, “every step is a collision,” Kim says. (Vehicles also have trouble coping with the ground. As Kim points out, airplanes and boats have the run of air and sea, but we’ve had to smooth the way for land transportation with roads and railway tracks.)

Sangbae Kim’s first Cheetah prototype had a head, a spine, and a tail. The latest runs like a cheetah but “doesn’t look like an animal anymore,” he says.

When people try to make robots that walk and run, they often start with the same elements as those in industrial bots. For example, when it’s time to choose an actuator—the part of a machine that transforms an energy source into motion—they’ll go with a hydraulic one: strong and precise, but very stiff and unable to absorb shocks. They’ll put the actuator at the robot’s hip and a force sensor at the foot. When the robot walks, the force sensor figures out how hard it’s hitting the ground and tells the actuator, which adjusts accordingly.

Generally, though, this strategy doesn’t work too well. “You sense the force here [at the foot], but your actuator is far away and too slow,” Kim explains. “There’s quite a bit of mass and dynamics in between … it goes unstable.” (He likes to hammer this point home with a supercut from the 2015 DARPA Robotics Challenge, in which a series of expensive, impressive-looking bipedal robots topple over like tranquilized Terminators.)
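To see why distance and delay matter, it helps to put numbers on it. The toy simulation below (everything in it is invented for illustration; it is not any real robot’s controller) regulates the contact force of a foot pressed against stiff ground, using a force reading that arrives a few milliseconds late. With a 1-millisecond delay the force settles; with a 15-millisecond delay the same controller pumps energy into the oscillation instead of damping it, and the loop blows up.

```python
import numpy as np

def simulate(gain, delay_steps, steps=400, dt=0.001):
    """Toy force-control loop: a foot on stiff ground, with the force
    measurement arriving `delay_steps` control ticks late. Linearized:
    the foot is assumed to stay in contact throughout."""
    m, k, b = 5.0, 2e4, 100.0    # leg mass (kg), ground stiffness (N/m), damping (N*s/m)
    target = 50.0                # desired contact force (N)
    x, v = 0.005, 0.0            # compression (m) and its rate; start off-target
    sensed_log = [k * x] * (delay_steps + 1)
    forces = []
    for _ in range(steps):
        sensed = sensed_log[-(delay_steps + 1)]   # stale force reading
        u = target - gain * (sensed - target)     # actuator's correction
        contact = k * x                           # force the ground actually applies
        v += (u - contact - b * v) / m * dt       # semi-implicit Euler step
        x += v * dt
        sensed_log.append(contact)
        forces.append(contact)
    return np.array(forces)

for delay_ms in (1, 15):
    f = simulate(gain=2.0, delay_steps=delay_ms)
    print(f"{delay_ms} ms delay: force swing near the end = {np.ptp(f[-50:]):.0f} N")
```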

So Kim decided to start over, designing a new type of actuator with different priorities. His actuator is lean and mean—it’s high-torque, so it can generate a lot of rotational force, but it’s powered by a lightweight electric motor with minimal rotational inertia, which lets it quickly change how fast it spins. The rest of the Cheetah’s leg is designed “to be as light and as low-friction as possible,” he says.

Because the leg is skinny and light, the force produced by the actuator barely changes by the time it gets to the foot, making a force sensor unnecessary. This gives the Cheetah faster reflexes: it can change the amount of force it exerts about 50 times more quickly than robots that pair a remote actuator with a force sensor at the foot.
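The principle can be written down compactly. In statics, joint torques and foot force are related by τ = JᵀF, where J is the leg’s Jacobian; if the linkage in between is nearly massless and frictionless, inverting that relation turns the motor’s own torque readings into a force estimate, with no sensor at the foot. Here is a minimal sketch for a planar two-link leg (the geometry, lengths, and numbers are illustrative, not the Cheetah’s actual parameters):

```python
import numpy as np

def leg_jacobian(q1, q2, l1=0.25, l2=0.25):
    """Jacobian of foot position w.r.t. joint angles for a planar
    two-link leg (hip pitch q1, knee pitch q2; lengths in meters).
    Foot position: x = l1*sin(q1) + l2*sin(q1+q2),
                   z = -(l1*cos(q1) + l2*cos(q1+q2))."""
    c1, s1 = np.cos(q1), np.sin(q1)
    c12, s12 = np.cos(q1 + q2), np.sin(q1 + q2)
    return np.array([
        [l1 * c1 + l2 * c12, l2 * c12],   # dx/dq1, dx/dq2
        [l1 * s1 + l2 * s12, l2 * s12],   # dz/dq1, dz/dq2
    ])

def estimate_foot_force(tau, q1, q2):
    """Proprioceptive force estimate: solve tau = J^T F for F.
    Valid when the leg between motor and foot is light and
    low-friction, so motor torque ~ transmitted force."""
    J = leg_jacobian(q1, q2)
    return np.linalg.solve(J.T, np.asarray(tau))

# Example: hip at 30 degrees, knee at -60 degrees, torques in N*m
F = estimate_foot_force([12.0, 8.0], np.deg2rad(30), np.deg2rad(-60))
print(f"estimated foot force: Fx = {F[0]:.1f} N, Fz = {F[1]:.1f} N")
```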

A clumsier robot needs a whole data loop to kick in before it can figure out how hard its foot just hit the pavement and what it should do next. But “when the Cheetah lands from jumping over an obstacle, the feet are controlling the necessary forces to balance and recover immediately after colliding with the ground,” says Kim. (The design can also absorb energy much more readily—when the foot hits the ground, the impact force travels back up the leg and into the actuator, recharging the motor rather than breaking it.)

Instead of sensing force, the Cheetah focuses on figuring out where it is in space. Joint position sensors, accelerometers, and gyroscopes are constantly feeding data into a set of algorithms, which work to determine when and how hard each leg is likely to hit the ground next. When the Cheetah’s foot steps on something unexpected—say, a rock that causes its body to tilt—this information helps the robot decide whether to continue its step or pull back. If it does commit to a step, another algorithm kicks in to predict how much force to apply to get over the object—or what compensating force is needed to adjust its balance if it gets jostled.
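As a rough sketch of what two pieces of that pipeline might look like (simplified far past the lab’s real estimators; every name and threshold here is invented for illustration): a complementary filter blends the gyro’s rate, which is smooth but drifts, with the accelerometer’s gravity direction, which is noisy but drift-free, into one tilt estimate, and a one-step predictor decides whether the swing foot is about to touch down.

```python
import numpy as np

def update_pitch(pitch, gyro_rate, accel, dt, alpha=0.98):
    """Complementary filter for body pitch: integrate the gyro, then
    pull the estimate gently toward the accelerometer's gravity-based
    angle. `accel` is the (ax, ay, az) reading, +z up when level."""
    accel_pitch = np.arctan2(-accel[0], accel[2])
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

def touchdown_imminent(foot_height, foot_velocity, dt, threshold=0.01):
    """Will the swing foot reach (assumed flat) ground within one
    control step? If so, switch the leg from position tracking to
    force control before the impact, not after."""
    return foot_height + foot_velocity * dt <= threshold

# Example: body pitched ~5 degrees, gyro reads 0.2 rad/s
pitch = update_pitch(np.deg2rad(5.0), gyro_rate=0.2,
                     accel=np.array([-0.86, 0.0, 9.77]), dt=0.001)
print(f"pitch estimate: {np.rad2deg(pitch):.2f} deg")
```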

This set of priorities has enabled the Cheetah to do things most other bots cannot, like trot and jump. It’s also extremely efficient—it uses energy only slightly less judiciously than an actual cheetah, putting it leagues ahead of other robots. It can even maneuver around its environment without cameras. In one highlight reel, a “blind” Cheetah runs across a gravel patch, climbs a flight of stairs, and repeatedly rights itself as a lab member prods it with a stick. Kim calls his approach “proprioceptive actuation,” after the “sixth sense” that gives humans awareness of our bodies’ position in space.

Achieving such stability requires sacrificing some accuracy—“we have a 10% or 15% error [of force control] constantly,” Kim says. While that might dissatisfy some engineers, the Cheetah is so light and energy-absorptive it can generally tolerate the error rate, even with heavy impacts from running and jumping.

Translating the behavior of living creatures into mechanical terms requires a multilayered mind-set. “Everybody in robotics is very focused on their own little area—there are a lot of software groups that think everything can be solved with code, or hardware groups with hardware,” says João Ramos, PhD ’18, one of the lab’s postdocs. “Sangbae has an integrated view. If you want to solve the problem, you have to think about it at a concept level, a hardware level, and software.”

“This paradigm shift was possible because I’m a mechanical engineer,” agrees Kim. “I think about dynamics of rigid bodies instead of [only] writing software.” Several companies, including Boston Dynamics, now use his actuator design in parts of their robots, too.

Climbing up
Kim got into the habit of looking for new ways of doing things while growing up in Seoul, South Korea, living in a small space without many resources—or a workshop. “I built a lot of things,” he says. “I found every possible way to create my own tools.” He took apart home appliances to see if he could piece them back together. When his friends raced their radio-controlled cars around, he would flip his belly-up and tinker with it.

As a student at Yonsei University in Seoul, he designed what was then the world’s least expensive 3-D scanner. (He also served his mandatory stint in the South Korean army, an experience he says intensified his distaste for bureaucracy.) He joined a startup that was commercializing the scanner but, soon after developing the first prototype, realized he preferred inventing to fine-tuning and decided to return to academia.

When he got to Stanford for grad school in 2002, he wanted to keep working in hardware design but realized that many tasks that once required tinkering with moving parts now happened on computers. “What cannot be replaced by electronics?” he says. “If you have to work on something that physically interacts with the environment, it can’t be replaced by a piece of code or a chip … That’s why I stepped into the robotics world.”

Kim joined Mark Cutkosky’s Biomimetics and Dexterous Manipulation Laboratory at Stanford. “I was fascinated by how animals move,” he says. “I was focused on the principle ‘Oh, this is something in animals—let’s replicate it.’” He worked on a spiderlike climbing machine and a swarm of cockroach-inspired bots that could run by themselves. Later, as a postdoc at Harvard, he built an autonomous robotic earthworm. (It moves by squeezing its segments in response to an electrical current, and it is soft enough to survive being stepped on.)

But his first big breakthrough was Stickybot, a robot that can scale walls like a gecko. Like cheetah legs, gecko feet accomplish two difficult things at once: they can adhere to a wall with great strength, but they can also detach from it with great speed. “If you think about building a climbing suit—if you have really sticky hands, you can climb the wall, but if your hands are that sticky, you can’t get off the wall,” says Kim. “But the geckos are running up.”

Mini Cheetah, a smaller, safer, more agile Cheetah, is for research and education. Data is collected through its tether, and control algorithms can be changed the same way.

In 2006, Kellar Autumn, a biologist at Lewis & Clark College, published a paper detailing exactly how geckos manage it. The key is in tiny hairs on their feet, which are structured to stick only when pulled in one direction. Kim used the principle to create the Stickybot and an adhesive he calls “gecko tape.” “It’s probably still my favorite project in terms of science,” he says. “We developed a new material—a new concept that didn’t exist before we understood the gecko.”
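The resulting model, which Autumn called frictional adhesion, fits in one inequality: the adhesive can resist pull-off only in proportion to the shear load applied along the hairs’ preferred direction. A toy version of the check (the critical angle here is a representative figure, used purely for illustration):

```python
import numpy as np

def sticks(shear_force, normal_force, alpha_star_deg=30.0):
    """Directional adhesion check. Pulling along the hairs (positive
    shear) buys pull-off resistance up to shear * tan(alpha*); pushing
    against them, or pulling straight off, buys none. Negative
    normal_force means pull-off."""
    if shear_force <= 0:
        return normal_force >= 0          # no adhesion against the grain
    return normal_force >= -shear_force * np.tan(np.deg2rad(alpha_star_deg))

print(sticks(shear_force=1.0, normal_force=-0.4))   # loaded with the grain: holds
print(sticks(shear_force=-1.0, normal_force=-0.1))  # loaded against it: releases
```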

In 2009, Kim joined the MIT faculty, and for years he often met up with Rodney Brooks at Starbucks to toss around ideas. (Brooks, the former director of CSAIL, had gone off to found Rethink Robotics in 2008.) “He thinks broadly,” Brooks says—and tries things that might scare other people. Brooks recalls that at a 2017 Amazon conference, Kim decided to figure out how to give the Cheetah speech commands using an Amazon Echo. “By the time his demo came around the next morning, he was able to talk to his robot for the first time,” he says. “That was a gutsy move.”

Kim earned tenure in 2016, and every other year he teaches 2.74 (Bio-Inspired Robotics), for which students have made bots that swing like a monkey or jump like a kangaroo. He also co-teaches 2.007 (Design and Manufacturing). The storied robotics design class culminates in a themed competition that always draws a crowd, and Kim and his co-instructor, Amos Winter, SM ’05, PhD ’11, dress up in costumes to emcee it. Last year, Kim played Willy Wonka. “A lot of the high-level lectures he gave were about how to take inspiration from nature,” recalls Selam Gano ’18, who took his biomimetics class in 2017. “He’ll say things like ‘When you leave this class I want you to just stare at your hand and be like, “Wow, this is incredible!”’… He really infects everyone with how excited he is.”

Push it to the limit
Sometimes you don’t know how incredible something really is until it stops working. For example, about 15 years ago, Kim ruptured his Achilles tendon. It threw him for a loop: sure, he was playing basketball at the time, but he wasn’t doing anything fancy. “I was just walking,” he says. “It was weird.” His doctor put him on six months of couch rest.

Kim—who has since switched to tennis—is not a fan of couch rest. All the same, he found the experience enlightening. “Your muscles are strong enough to rip out tendons and dislocate joints all the time,” he says. “Our nervous systems are always carefully adjusting the amount of force you need to generate.” Somehow his body had bypassed this, and he’d pushed past his own limits. But most of the time, we protect ourselves. Unlike those clumsy DARPA Challenge robots, we manage to have both power and control.

“I was focused on the principle ‘Oh, this is something in animals—let’s replicate it.’”

What’s more, like geckos scaling a wall, we do this without even thinking about it. “Your walking is a million times more amazing than jet fighters flying,” says Kim, rattling off a list of our subconscious skills. We can open doors without losing our balance. We can jog down the street while distracted. We can eat breakfast as we carry on a conversation, “and we’re not thinking ‘Oh, I’m moving this little lump of potato back to the left side of the teeth so the teeth can crush it down into a reasonably sized piece,’” he says. “We take too many things for granted!”

We may never need a robot that chews. But if we want one that’s great at staying upright, it might help to tap into our own abilities—as another one of Kim’s projects does. HERMES (which stands for “highly efficient robotic mechanisms and electromechanical system”) is a bipedal robot that uses the same unique actuators as the Cheetah. But instead of operating completely on its own, it is controlled by a human, using what Kim calls a “balance feedback interface.”

To control HERMES, a human operator wears a special motion-detector vest and stands on a platform with embedded force sensors. By tracking and transferring motion data in both directions through wires, vest and platform forge an experiential connection between human and robot. Say HERMES is supposed to open a heavy door. The human operator makes a pushing motion, and the robot follows suit. When HERMES hits the door, the human feels the impact and adjusts his balance accordingly. HERMES makes the same adjustments and avoids falling over. Algorithms tweak the relevant forces so that a human wearing the vest can control a smaller robot, or a four-legged one.
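A sketch of the two mappings such an interface needs, with every gain and body parameter invented for illustration (this is not the HERMES code): the operator’s motion is scaled onto the robot in proportion to body size, and the robot’s contact forces are scaled back so that a shove feels proportionate to the person absorbing it.

```python
import numpy as np

def map_operator_motion(human_com_shift, human_height=1.75, robot_height=1.45):
    """Map the operator's center-of-mass displacement (from vest and
    force plate) onto the robot, normalized by stature so a lean of
    the same proportion is reproduced on the robot's side."""
    return np.asarray(human_com_shift) * (robot_height / human_height)

def scale_feedback_force(robot_contact_force, human_mass=70.0, robot_mass=45.0):
    """Scale the force the robot feels back to the operator's platform,
    here by the mass ratio, so an impact that would tip the robot
    feels like one that would tip the person."""
    return np.asarray(robot_contact_force) * (human_mass / robot_mass)

# Example: the robot's hands meet a heavy door with 120 N of resistance
print(scale_feedback_force([120.0, 0.0]))   # force the platform replays
```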

In this way, the system allows both human and bot to bring their strengths to the situation while minimizing their weaknesses. Humans are smart, and good at balancing and fine manipulation, but we’re pretty fragile. Robots are strong and tough, but they need a lot of direction. Kim wants to combine this technology with the Cheetah, replacing one of its legs with a proprioceptive robotic arm he’s working on. The arm joins human with machine at a more delicate scale, letting the operator feel what’s happening when the robot grips a rope or twists a doorknob.

He imagines a first responder using these tools to “explore” a dangerous area. “You’ve got [VR] goggles, and maybe voice command: ‘Cheetah, go to room 507,’” he says. The Cheetah quickly makes its way there, moving efficiently and avoiding debris. It finds its target: “Oh, there’s a gas leak, and you need to close this valve.” The bot can then stand on three legs as the human manipulates the fourth leg—which also doubles as the robotic arm—to adjust the valve.

“This is my big vision: human-level mobility, mostly autonomous, with the manipulation mostly done by humans,” Kim says. “These three components will eventually allow us to do this.” When they can, he adds, more possibilities will open up. Kim can picture his robots in the homes of the elderly, remotely activated when necessary by a person in a control room: they could provide both help and privacy for people who need assistance but still want to live on their own.

Or maybe his cheetah bots will end up doing dangerous manual labor, guided by skilled workers ensconced in safe places nearby. He predicts that in two to three years, Cheetah III will be able to navigate in a radiation-filled power plant; in a decade, its successor should be able to do more physically demanding work, such as manipulating debris. And in 15 to 20 years, he thinks, it could enter a burning building and rescue people.

But Kim has stopped focusing on copying other creatures outright. “When I first imagined my robot running, galloping like a cheetah, I’d always think about this beautiful body bending,” he says. He quickly realized, though, that a supple backbone wouldn’t make his bot any better at its eventual job. The same goes for other details: “At the beginning, I’d look at each bone shape, trajectories, and so on,” he says. “I still take a look at a lot of biology studies to really understand what’s going on.” But he treats them more like inspiration than instruction booklets: “Now I’m like, ‘Okay, four legs is good.’”
