
Open-Source Robots Distributed

Eleven teams will share their work as they teach robots real-world skills.

Robotics company Willow Garage has started a two-year project to work with institutions from around the world on new applications for its robot, the PR2. Each of the 11 teams will work on its own projects but will share its code with the others and the rest of the world. Everything created will be open-source, meaning others can use the code for their own endeavors. (The PR2 runs on ROS, short for Robot Operating System, a software platform also developed by Willow Garage.)
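For a sense of what the shared code looks like, here is a minimal sketch of a ROS node written against rospy, the platform's Python client library. The node name, topic, and message are illustrative assumptions for this article, not anything the teams above have published.

    # A minimal ROS node sketch in Python (rospy). The node name and
    # "chatter" topic are hypothetical examples, not actual PR2 code.
    import rospy
    from std_msgs.msg import String

    def main():
        rospy.init_node("pr2_demo_talker")   # register this node with the ROS master
        pub = rospy.Publisher("chatter", String, queue_size=10)
        rate = rospy.Rate(1)                 # loop at 1 Hz
        while not rospy.is_shutdown():
            pub.publish(String(data="hello from the PR2"))
            rate.sleep()

    if __name__ == "__main__":
        main()

Because every node talks over named topics like this one, code written for one team's PR2 can, at least in principle, be dropped onto another's.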

Reminiscent of Johnny 5 from the movie “Short Circuit,” the PR2 has two compliant arms that are strong yet capable of delicate tasks: the PR2 can turn the pages of a book, for example. The arms sense the forces applied to them, helping the robot respond accordingly. Stereo cameras, scanning laser range finders, inertial measurement sensors, and an array of other tools provide the data about the robot’s environment needed to complete a wide range of tasks, including navigating a room and opening a door with a spring-loaded handle.

Each team hopes to expand the system’s skills. The team from Stanford University (where the technology behind the robot was born) is working on software for cleaning up a table and taking inventory. Folks at MIT’s CSAIL lab, meanwhile, will work on object recognition and putting away groceries. Bosch will develop skins that allow the robots to feel their environment. Using an earlier version of the robot, Pieter Abbeel’s lab at the University of California, Berkeley, developed software for neatly folding towels. (Look out, Gap employees! T-shirts could be next!)

“We want to get robots out of factories and into the real world,” said Willow Garage CEO Steve Cousins at a press conference yesterday at the company’s Menlo Park, CA, offices.

The view from there: Eric Berger, co-director of Willow Garage’s Personal Robotics Program, explains the many features of the PR2 robot beside him. The picture shows the view from a Texai video-conferencing robot, which can be operated over the Internet: the main image is the “head” view, while the inset in the lower right corner shows the floor around the robot, providing a much better view for navigation.

I attended the event via another of the company’s creations: the Texai. Using it is a bit like video conferencing while driving a remote-controlled car over the Internet. The robot consists primarily of a flat-screen monitor with audio and video equipment. Folks who looked at my screen saw my face as I sat in my living room in New Jersey. Using Skype, I was able to see and hear most of the press conference with ease. I got a good spot in the front row and drove up to a few folks afterwards to ask follow-up questions. It was, however, a bit hard to hear some people while mingling in the noisy room after the event. But as long as a person was facing me directly, I could hear them just fine.

The only other oddity: because of the position of the camera on the Texai, it often seemed as though people were staring at my chest instead of looking me square in the eyes. But I suppose that happens a fair bit in real life, too.
