MIT News: 77 Mass Ave

Now playing: DribbleBot

A four-legged robot that can maneuver a soccer ball on diverse terrain could also be useful in disaster aid.

[Image: close-up of a person's legs kicking a soccer ball toward a four-legged robot on a campus lawn. Through trial and error in a simulation, the robot learned how to apply force with its legs to manipulate a ball on different surfaces. Credit: Mike Grimmett/MIT CSAIL]

It’s no Lionel Messi, but a four-legged robot developed at CSAIL’s Improbable Artificial Intelligence Lab can dribble a soccer ball on surfaces including grass, sand, gravel, mud, and snow. 

To develop these hard-to-script skills, the researchers turned to a simulation—a digital twin of the natural world. “DribbleBot” started out with no idea how to dribble, but it learned through trial and error what sequence of forces it should apply with its legs. Four thousand versions of the robot could be simulated in parallel, vastly speeding the process. 
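The article doesn't publish the lab's training code, but the idea it describes, thousands of simulated robots trying actions in parallel and keeping what works, can be sketched in miniature. Everything below is illustrative: the environment is a toy one-step physics model, and names like `rollout`, `TARGET_VEL`, and the 0.8 force-to-velocity factor are assumptions, not DribbleBot's actual system.

```python
import numpy as np

N_ENVS = 4000        # simulated robots running in parallel, as in the article
TARGET_VEL = 1.0     # assumed desired ball velocity (m/s)

rng = np.random.default_rng(0)

def rollout(gains):
    """Toy simulator: each env applies a leg-force gain; the ball's
    velocity responds roughly linearly, with a little noise."""
    ball_vel = 0.8 * gains + rng.normal(0.0, 0.05, size=gains.shape)
    # Reward: negative squared error from the desired velocity.
    return -(ball_vel - TARGET_VEL) ** 2

# Simple evolutionary trial and error: perturb a shared policy parameter
# across all envs, then move toward what the best-performing robots did.
theta = 0.0
for _ in range(50):
    candidates = theta + rng.normal(0.0, 0.2, size=N_ENVS)
    rewards = rollout(candidates)
    best = candidates[np.argsort(rewards)[-40:]]   # top 1% of envs
    theta = best.mean()
```

With this toy model the learned gain settles near `TARGET_VEL / 0.8`; the point is only that parallelism lets each update draw on thousands of simultaneous trials instead of one.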

“Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity,” says MIT PhD student Gabe Margolis ’20, MEng ’21, who led the work along with research assistant Yandong Ji. Thanks to a recovery controller the team built into its system, the bot can also navigate unfamiliar terrain and recover from a fall. These skills could have applications far beyond soccer.
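A recovery controller like the one mentioned above typically works as a supervisor that switches policies when the robot tips over. This is a hypothetical sketch under assumed names (`select_controller`, the 60-degree threshold); the source doesn't describe the team's actual switching logic.

```python
import math

FALL_THRESHOLD = math.radians(60)  # assumed tip-over angle

def select_controller(roll, pitch):
    """Route to the recovery policy when the body has tipped too far,
    otherwise keep running the normal dribbling policy."""
    if abs(roll) > FALL_THRESHOLD or abs(pitch) > FALL_THRESHOLD:
        return "recovery"   # right the robot before resuming play
    return "dribble"

select_controller(0.1, 0.2)             # upright: "dribble"
select_controller(math.radians(80), 0)  # fallen:  "recovery"
```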

“Today, most robots are wheeled. But imagine that there’s a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need the machines to go over terrains that aren’t flat, and wheeled robots can’t traverse those landscapes,” says Pulkit Agrawal, EECS professor and director of the Improbable AI Lab. Previous attempts to program soccer-playing robots have assumed flat, hard ground, and the bot wasn’t “trying to run and manipulate the ball simultaneously,” says Ji.

On the hardware side, the robot has sensors that let it perceive the environment, actuators that let it apply forces, and a computer “brain” that converts sensor data into actions—all in one compact, autonomous package. 

“Our robot can go in the wild because it carries all its sensors, cameras, and [computing resources] on board,” Margolis says. The team’s next steps include teaching it new skills like handling slopes and stairs.
