In a month’s time, a motley assortment of robots will attempt to navigate a punishing obstacle course laid out in a fairground park in Pomona, California. At the challenge, organized by the Defense Advanced Research Projects Agency (DARPA), about two dozen machines will make their way through a series of tasks meant to push the limits of robot navigation, manipulation, and locomotion.
Before many of the robots set foot (or wheel) on the course, however, they will be put through their paces in a highly realistic virtual world. This 3-D environment, called Gazebo, makes it possible to try out robot hardware or software without having to power up the real thing. It’s a cheap and quick way to experiment without risking damage to valuable hardware components. And it allows many researchers to work on a single robot simultaneously.
DARPA is a government agency charged with funding far-out research, and its contest is meant to encourage the development of robots that could enter an extremely dangerous environment— such as a badly damaged nuclear power plant after a meltdown—and perform work that humans would normally do. Each task the robots will face in Pomona will simulate vital repair work, such as turning off a water pump, sealing a contaminated building, or driving vehicles carrying equipment. Most of the robots involved are humanoid in shape, although some more closely resemble huge mechanical spiders.
DARPA has also funded development of Gazebo in recent years. The software resembles the sort of 3-D virtual space seen in many computer games, but it offers far more realistic approximations of physical forces and phenomena such as friction and lighting. Realistic noise can be fed into robot sensors to simulate the kinds of challenges roboticists will face when a robot tries to perform a task for real.
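In Gazebo itself, sensor noise is configured in the simulator’s world and model files rather than written by hand, but the underlying idea is simple: corrupt each ideal sensor value with zero-mean Gaussian noise. The sketch below is a hypothetical, stripped-down illustration of that model, not code from Gazebo.

```python
import random

def noisy_reading(true_value, stddev=0.01):
    """Corrupt an ideal sensor value with zero-mean Gaussian noise,
    mimicking the kind of noise model a simulator applies to sensors."""
    return true_value + random.gauss(0.0, stddev)

# An ideal range sensor would read exactly 1.5 m; the simulated one
# jitters around that value, so perception code must cope with noise.
readings = [noisy_reading(1.5, stddev=0.02) for _ in range(1000)]
average = sum(readings) / len(readings)
```

Averaged over many samples, the noisy readings converge on the true value, but any single reading can be off, which is exactly the challenge real sensors pose.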
“We are trying to mimic reality as closely as we can,” says Nate Koenig, CTO of the Open Source Robotics Foundation, which develops Gazebo; Koenig has spent the last decade leading the software’s development. “The goal is to easily switch over to a real robot.”
Gazebo is part of the Robot Operating System, free and open-source software for controlling various parts of a robot. Because roboticists contribute code back to the ROS project, the operating system has gained considerable momentum as a platform for robot development, especially within academia. Gazebo and ROS are being used to develop many other types of hardware. A researcher in Switzerland, for instance, is using the software to develop an autopilot system for quadcopter aircraft.
“This is part of a recent trend toward a democratization of robotics,” Pras Velagapudi, a researcher at Carnegie Mellon University who is developing one of the robots taking part in the DARPA contest, said via e-mail. “Using robotic systems historically meant solving a lot of problems yourself. You had to create your own hardware, write your own software to use that hardware, and set up your own simulation tools to test both.”
A few industrial robots already use ROS and Gazebo, including those made by Boston-based Rethink Robotics. Rethink’s robots, which are meant to be easy to program, can work alongside humans on a simple factory production line. The company has developed its own software simulation platform for commercial customers, but it encourages academic researchers to use Gazebo to experiment with its first robot, a two-armed machine called Baxter.
“No one wants to start from scratch,” says Brian Benoit, a senior product manager at Rethink. “If you have a lab that’s really good at machine vision, you don’t want to have to worry about inverse kinematics,” he says, referring to the mathematical equations used to model the movement of a robot’s joints.
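For a sense of what those equations look like, here is a textbook closed-form solution for a planar two-link arm, the simplest inverse-kinematics problem: given a target position for the hand, work backward to the joint angles. This is an illustrative sketch, not code from Rethink or Gazebo.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics for a planar two-link arm with link lengths
    l1 and l2: return joint angles (theta1, theta2) in radians that
    place the end effector at (x, y), using the elbow-down solution."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)
    # Shoulder angle: direction to the target, corrected for the elbow bend.
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def forward(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics: map joint angles back to the hand position."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

Running the angles from `two_link_ik` back through `forward` reproduces the target position, which is the standard sanity check for an IK solver. Real humanoids have many more joints, so their solvers are numerical rather than closed-form, but the principle is the same.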
A highly accurate 3-D environment is especially useful for robots designed to perform in a complex and unpredictable setting. The robots involved in the DARPA challenge will face variable lighting and physical setups, and a misstep could easily leave them damaged. “A lot of the time, especially with humanoids, you’ll typically try grasping things and see if you’d collide with yourself,” Koenig says.
Gazebo is also being used by many of the teams involved in another robot challenge, which will be held at a major robotics conference in Seattle this month. This contest, funded by Amazon, will involve robots identifying and picking products from shelves, much as humans do in Amazon’s warehouses (see “Amazon Robot Contest May Accelerate Warehouse Automation”). Amazon already uses robots in its fulfillment centers to move shelves around. Grasping items from those shelves is a much harder challenge.
“It’s very useful,” says Joe Romano, who is helping to organize the Amazon challenge, and who works for a robotics startup that is currently in stealth mode. “Anyone wanting to build a robot will want to simulate it a lot. Gazebo is the go-to tool.”
Even so, Velagapudi says there are limits to what can be done through Gazebo, simply because there are limits to how well we can model the physical world. The way a robot makes contact with a physical surface, for example, is difficult to simulate accurately. “The real world has a tremendous amount of detail that isn’t easy to represent in our models,” he says.