Putting your hand in front of an industrial robot arm is not, generally, a good idea. These machines might move quickly and precisely, but they are so blind and stupid that they’ll gladly break a limb without so much as an “oops.”
So it took a little courage to try this trick with a robot arm being tested at Realtime Robotics, a startup located in Boston’s Seaport neighborhood. I reached forward to intercept its movement as it grasped a widget from a table and moved to put it in a box. Thankfully, the robot paused, moved deftly around my outstretched arm, and then neatly deposited the item in its box. No broken limbs today.
This kind of graceful adaptability could prove incredibly useful for the robotics industry. There are some robots that can work alongside people, but they tend to be low-power, imprecise, and of limited use. The most capable, and powerful, industrial machines still have to work in very precisely controlled environments, away from soft, breakable humans.
“Even if you’re not worried about having humans next to the robot, you might want to modify your cell without incurring the cost of bringing in a technician,” says Sean Murray, a robotics engineer and cofounder at Realtime Robotics who showed me around.
The movement problem
A number of companies are trying to find ways around this problem. Some are testing sensors that will stop a powerful robot in its tracks if it spots an obstacle. Realtime Robotics is trying to go further, by giving robots the kind of low-level intelligence needed to move through the real world. This is the physical awareness that humans and animals take for granted whenever they move an arm or a leg.
In several different rooms at Realtime, industrial robot arms are testing the capabilities of a new chip that the company has developed to make this possible. When hooked up to 3D sensors, this chip lets the machines rapidly consider a range of different actions, effectively “imagining” the outcome, before choosing the one best suited to the task at hand. In one room, I watched as two robots performed balletic feats of teamwork, gliding around one another and occasionally handing over items.
“The fundamental challenge is that robots are so stupid,” says George Konidaris, founder and chief roboticist at Realtime as well as an assistant professor at Brown University in Providence, Rhode Island. “We have this basic motor competence and robots don’t.”
Motion planning is deceptively difficult for a robot, partly because each joint adds an extra dimension to the calculations that must be performed.
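To get a feel for why extra joints hurt so much, here is a small illustrative sketch (not Realtime's algorithm; the step count is a hypothetical resolution): if each joint's angle is discretized into a fixed number of positions, the number of possible arm configurations a planner might have to reason about grows exponentially with the number of joints.

```python
# Illustrative only: discretize each joint into a fixed number of steps
# and count the configurations of the whole arm.
STEPS_PER_JOINT = 100  # hypothetical resolution

def configuration_count(num_joints: int) -> int:
    """Size of the discretized configuration space for an arm."""
    return STEPS_PER_JOINT ** num_joints

# A 2-joint arm has 10,000 configurations; a typical 6-joint
# industrial arm has a trillion.
for joints in (2, 4, 6):
    print(joints, configuration_count(joints))
```

The numbers are toy values, but the exponential blowup they illustrate is real, and it is why naive planning gets slow as arms gain degrees of freedom.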
Make your move
The company’s chip supercharges the mathematical computations behind a relatively simple motion-planning algorithm developed by Konidaris and others while he was at Duke University. By running the computations in parallel, the dedicated chip can perform them more than 10,000 times faster than a general-purpose processor, while also using less power.
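The key property that makes this kind of parallelism possible can be sketched in a few lines (a hedged illustration of the general idea, not the company's design; the edge names, cell coordinates, and data structures are all hypothetical): if each candidate motion in a precomputed roadmap is represented by the set of space cells the arm sweeps through, then every candidate can be checked against the sensed obstacles independently of the others.

```python
# Hedged sketch: each roadmap "edge" (candidate motion) has a
# precomputed set of cells the arm sweeps through. Checking an edge
# against the sensed obstacles needs no information about other
# edges, so all checks can run at once -- which is what dedicated
# parallel hardware exploits.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data: edge -> swept cells; obstacles = occupied cells.
swept_cells = {
    "edge_a": {(1, 1), (1, 2)},
    "edge_b": {(3, 3), (3, 4)},
    "edge_c": {(5, 5)},
}
obstacles = {(3, 4), (9, 9)}

def edge_is_free(edge: str) -> bool:
    """An edge is usable if none of its swept cells is occupied."""
    return swept_cells[edge].isdisjoint(obstacles)

# The checks are independent, so they parallelize trivially.
with ThreadPoolExecutor() as pool:
    free = dict(zip(swept_cells, pool.map(edge_is_free, swept_cells)))

print(free)  # edge_b is blocked by the obstacle at (3, 4)
```

A thread pool only hints at the concurrency; a dedicated chip can dedicate circuitry to every edge and run all the checks simultaneously, which is where the reported speedup comes from.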
“The approach is very clever,” says Tomás Lozano-Pérez, a professor at MIT who advised Konidaris when he was a graduate student.
This is part of a broader trend. Advances in software and hardware are gradually starting to increase robot IQ, perhaps paving the way for more capable industrial robots that can be used in powerful new ways. Smarter robots could sit on a production line next to a person—for example, figuring out how to grab objects no matter how they are arranged, and without accidentally hurting anyone. This could accelerate the spread of automation across many industries.
Lozano-Pérez adds that better motion planning will be fundamentally important for the future of robotics. “Any robot that is going to move around purposefully to achieve goals had better think about how it should move,” he says. “The challenge is that motion planning is slow when the environment is cluttered, and especially when the robot has many degrees of freedom.”
There is one other big potential application for the technology: self-driving vehicles. Just as a robot needs to plan its motion if it is to avoid hitting things, a self-driving car needs to quickly decide the safest route around obstacles. Realtime is already developing a version of the chip with this in mind. Konidaris says it should let self-driving vehicles adapt rapidly to complications on the road, perhaps making them safer.
The first test will be whether robot arm makers decide to make use of Realtime’s technology. Already several are testing it out. “The ultimate success will depend on how it’s integrated,” says MIT’s Lozano-Pérez. “But it seems to me that it opens new possibilities for robot system design.”