Traditionally, robots were designed to work separately from people. That is starting to change as robots begin working alongside humans to courier medicine in hospitals and assemble complex machinery. New legged robots could soon accompany soldiers across treacherous terrain or perform rescue missions at stricken nuclear power facilities. But for the most part, robots still can’t function in human environments without costly changes to people’s own working patterns.
Researchers are now beginning to understand how to build robots that can integrate seamlessly and safely into human spaces. One approach is to give them more humanlike physical capabilities. A human-size robot with legs, arms, and hands can use the same pathways, doors, and tools that we do, so the environment need not be laboriously retrofitted. Of course, a robot does not have to do a job the same way as a person. The Roomba vacuum cleaner appears to bounce randomly around the room, while we would employ a more efficient and methodical approach. However, the Roomba, unlike us, has only one job to do and does not get bored or impatient. In designing a robot’s physical capabilities, we must think carefully about the context in which it will be deployed and remember it isn’t necessarily bound by the considerations guiding the way people work.
The same applies as we begin to design robots intelligent enough to work alongside people. It is as impractical to redesign our work practices for robots as it is to redesign our physical world for them. We must instead build robots capable of doing their jobs with only minimal disruption to the people they work with or near.
This will require them to have mental models of what governs our actions. Robots can build these models the same ways people do: through communication, experience, and practice. We do not require that robots have our full human capabilities for decision-making, communication, or perception. Through careful study of effective human work practices, my own research group is designing robots with planning, sensing, and communication capabilities suited to their contexts. For example, our assembly-line robot learns when to retrieve the right tool by observing its human coworkers, without necessarily having to ask. Robots like this one work seamlessly with people and reduce the economic overhead of deploying new systems. As a result, it will soon be practical to extend human capability through human-robot teamwork.
Julie Shah is an assistant professor at MIT and leads the Interactive Robotics Group.