AI that leads to fewer car accidents and less roadway congestion. Personal augmentation systems that help people thrive as they age. Robotic orderlies that make emergency rooms safer and more efficient. In their new book, What to Expect When You’re Expecting Robots, Laura Major, SM ’05, and Julie Shah ’04, SM ’06, PhD ’11, envision such Jetsons-like benefits from human-robot collaboration.
“People seem to be concerned about whether robots will one day make us obsolete—whether they will become smarter, faster, better than their human creators. But the reality is that robots and humans will probably always be good at different things,” they write. “It is possible that some of our most stubborn societal problems could be better addressed by the kind of collaboration we envision.”
Major and Shah are well suited to explore the future of human-robot collaboration: Major is CTO of Motional, an autonomous-driving joint venture of Hyundai and Aptiv, and Shah, an associate professor of aero-astro, focuses on industrial human-robot collaboration as director of MIT’s Interactive Robotics Lab. This excerpt from their book examines how we can adjust the environment in small ways to turn robots into effective collaborators.
It’s the end of the day on a Friday, and you didn’t make it to the mall to pick up favors for your child’s birthday party this weekend. So you log on to Amazon to see what’s available for next-day delivery. In addition to the favors, you find lightbulbs to replace the burned-out bulb in your table lamp, and spot a new book that you decide to buy. You click “Place my order,” and a short while later, robots in Amazon’s fulfillment warehouses are whizzing away to make sure it’s delivered to you right on time.
The warehouse is full of small, flat robots that shimmy underneath shelves loaded full of everything from blenders to wool coats to table saws. When your order is queued up, robots near the shelves with your products are notified. They slip beneath the requisite shelves, lift them, and zip through the warehouse, stopping and starting, moving left and right, “dancing” around all the other robots also moving through the warehouse. It’s a truly beautiful sight.
You might be surprised to learn, however, that these robots are mostly blind. Equipped with just a few sensors, they “navigate” their world by looking straight down at paper taped onto the floor by human workers. The warehouse is one big grid, with a unique paper pattern taped to the floor of each grid square. The robots simply track the sequence of paper patterns they pass to confirm their location. When one of the pieces of paper is ripped up by the robots’ wheels, a person pauses the robots, walks into the space, and retapes the paper to the ground. Amazon has 175 fulfillment centers around the world, and these robots currently zip around in 26 of them, working with humans to fill your orders.
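To make the idea concrete, here is a minimal sketch of this style of navigation. The marker IDs, the grid layout, and the route are all invented for illustration; the real system’s markers and control logic are of course far more involved. The key point survives, though: the robot never perceives its surroundings, it only confirms that the marker beneath it matches the next one on its planned route.

```python
# Hypothetical sketch: a warehouse robot confirming its position by
# reading the unique marker taped to each floor grid square.
# Marker IDs, grid layout, and route are invented for illustration.

GRID_MARKERS = {               # marker pattern -> (row, col) grid square
    "A1": (0, 0), "A2": (0, 1),
    "B1": (1, 0), "B2": (1, 1),
}

def follow_route(route, read_marker):
    """Drive along a planned sequence of grid squares, checking each
    floor marker against the expected one. If a marker is missing or
    torn, the robot halts and a human retapes it."""
    for expected in route:
        seen = read_marker()   # downward-facing sensor reading
        if seen != expected:
            return f"lost at {GRID_MARKERS.get(expected)}: expected {expected}, saw {seen}"
    return "route complete"

# Simulate the downward sensor reading markers in sequence
readings = iter(["A1", "A2", "B2"])
print(follow_route(["A1", "A2", "B2"], lambda: next(readings)))
# prints "route complete"
```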
This may sound like a hacky solution, developed by a startup to get the robots out the door. But Amazon, one of the most successful companies on Earth, still chooses to tape paper to the floor. Why? Robots with fewer sensors are less expensive—and less likely to fail. Robots are much more reliable if you program them to follow a pattern of papers on the ground than if you try to get them to observe the world around them, detect obstacles, plan a path around the obstacles, and then continue to look for their destination. Sometimes simple is best.
Our societies are not currently built to handle the needs of independent robots, and it’s not clear that we can simply make robots that need only what our infrastructure currently offers.
While things like traffic lights, speed-limit signs, orange cones, interstate on-ramps, and crosswalks help humans coordinate driver and pedestrian activities safely and efficiently, robots will need even more structure and support from the environment. Their sensory systems take in lots of data about the world around them, but they are not as good as we are at deriving meaning from it.
The good news is that we can change our environment in small, simple ways that will make the world much easier for robots—and safer for us. Aviation offers a good example.
Learning from the friendly skies
The average airplane passenger probably doesn’t realize that planes fly in lanes, following a virtual trail of breadcrumbs designed around a network of fixed ground beacons that revolutionized aviation safety. These navigational aids help the pilot and air traffic controllers track the plane’s location and have been in use since long before GPS existed. Airspace has also been divided into different flight levels and tracks that act like lanes on a highway, except these lanes are very wide (and tall) to accommodate the high speed of aircraft, potential errors in location estimates, and other factors, such as the wake vortices created by each plane. Today the vertical lanes are separated from each other by 1,000 feet. These lanes in the sky minimize the potential for aircraft to cross each other’s paths unexpectedly and collide. They also simplify the procedures for managing air traffic. For example, if two aircraft are on a collision course, rather than trying to calculate exactly when each one will arrive at the collision point, or recommending a slight maneuver to one of the pilots to prevent the collision, air traffic controllers typically ask one to climb 1,000 feet—and then the two planes are guaranteed not to come close to each other, because they are in separate lanes.
Structuring the airspace in this way has had a tremendous impact on the efficiency and safety of air transportation, because it offers clear rules that regulate the behavior of every plane in the sky.
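The elegance of the climb-1,000-feet rule is that it replaces a hard prediction problem with a trivial check. A toy illustration (the numbers follow the 1,000-foot convention described above, but the logic is invented for this sketch, not real air traffic control software):

```python
# Toy illustration of altitude-based separation: aircraft assigned to
# flight levels 1,000 ft apart never conflict vertically, so a climb
# of one level resolves a conflict. Not real ATC logic.

SEPARATION_FT = 1000

def flight_level(altitude_ft):
    # which 1,000-ft "lane" the aircraft occupies
    return altitude_ft // SEPARATION_FT

def vertically_separated(alt_a, alt_b):
    return flight_level(alt_a) != flight_level(alt_b)

a, b = 30000, 30000                   # both at 30,000 ft: conflict
print(vertically_separated(a, b))     # prints False
a += SEPARATION_FT                    # controller asks A to climb 1,000 ft
print(vertically_separated(a, b))     # prints True
```

No one has to compute when or where the paths would cross; the lane assignment alone guarantees the planes stay apart.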
Beyond considering how airspace is safely shared, the history of aviation navigation offers other lessons for learning to work effectively with robots. From aviation’s earliest days, airplanes needed to be carefully coordinated. At first, we simply used bonfires at the end of runways at night, and pilots looked for the blazing light to find where to land. Next, beacons were installed, and aircraft could use radio navigation to find the runway even on a cloudy day. Transmitters broadcast a modulated signal, which is received on the aircraft. The aircraft’s distance from each transmitter is calculated from the signal’s time of flight, and these distances are used to determine the position of the aircraft. Initially this was calculated by hand; now it is automated and extremely robust. World War II brought us radar surveillance, allowing air traffic controllers to track planes without relying on transmitters, especially in congested airspace such as the area around airports.
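The geometry behind this kind of positioning is worth a brief sketch. In this simplified two-dimensional version (the beacon spacing and aircraft position are made up, and real systems must handle clock errors and ambiguities this sketch ignores), each time-of-flight measurement gives a distance to a beacon at a known location, and two distances pin the aircraft down:

```python
import math

# Simplified 2D sketch of radio positioning: signal travel time from
# each of two beacons gives a distance, and the two distances locate
# the aircraft. Beacon spacing and positions are invented.

C = 299_792_458.0            # speed of light, m/s

def locate(t1, t2, d):
    """Position from time-of-flight to beacons at (0, 0) and (d, 0)."""
    r1, r2 = C * t1, C * t2  # distances to the two beacons
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = math.sqrt(max(r1**2 - x**2, 0.0))
    return x, y              # the north/south ambiguity is ignored here

# Aircraft actually at (30 km, 40 km); beacons 100 km apart
true_x, true_y = 30_000.0, 40_000.0
t1 = math.hypot(true_x, true_y) / C
t2 = math.hypot(true_x - 100_000.0, true_y) / C
x, y = locate(t1, t2, 100_000.0)
print(round(x), round(y))    # prints 30000 40000
```

Doing this arithmetic by hand in a cockpit was tedious and error prone; automating it is what made the method “extremely robust.”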
But the real revolution in air traffic came in 1956, following the collision of two planes over the Grand Canyon. The planes were operating in uncontrolled airspace, where pilots are expected to “see and avoid” other aircraft without any external help. Both pilots were maneuvering around scattered cumulus clouds to get a better view of the canyon, and both entered the same cloud, making it impossible for them to see each other. All 128 people aboard were killed.
This midair collision, which took place during the rise of commercial aviation, created a panic. Aviation rules at the time had no good way to protect planes against such conflicts. The solution was to centralize the management of airspace. The US Congress appropriated $250 million to upgrade the nation’s airway system and created the Federal Aviation Administration (FAA), giving it broad authority to combat aviation hazards. The FAA mandated ample separation between aircraft and planned “super skyways” to connect major East and West Coast cities, carving out airspace with separate rules to facilitate heavy cross-country travel.
Will we one day need such a central agency to create rules, develop external navigation support, and regulate other aspects of robot operation and control? Possibly. But at the very least, industry cooperation for negotiating the shared resources these robots will utilize—our roads, sidewalks, hallways, and aisles—will be key.
Working safely, side by side
Factories have been learning to work with robots for a few years now. Today, automotive factories are full of big, fast-moving robot arms that operate only in highly controlled environments confined by cages. The parts the robots are handling need to be placed precisely—if they are out of place by even a few millimeters, the entire operation grinds to a halt. And the robots can’t sense people nearby. If someone were to enter their space, it would be a significant safety hazard.
However, the truth is that relatively little work in most factories—even automotive factories—can be structured so carefully for robots. A car body can be built almost entirely by robots, but the rest of the job—installing wiring, seats, and dashboard elements—is still done almost entirely by people. This work can’t—and won’t—be performed exclusively by robots for the foreseeable future, because it requires skills that robots do not yet have. But manufacturing engineers are realizing that the work robots do already, such as assembly, welding, and packaging, can be done better and faster if robots are freed from their cages to work alongside people. Rather than trying to reproduce the tasks of a human worker, robots can actively assist the human—handing over the right part at just the right time, for example—and thereby drastically improve the productivity of the line. In fact, our studies, and others’, show that with close-proximity collaboration between humans and robots, tasks can be accomplished much more efficiently—up to 85% faster—than when humans perform assembly tasks without robot assistance.
Companies are therefore tackling the challenge of managing these complex machines in a way that is safe for the people who surround them. The robot smarts required to monitor the progress of humans and anticipate what they need is a far cry from the blind industrial robot in a cage, or even from the robots in Amazon’s warehouses that navigate using paper markers. Factories today need technology that allows for a more intimate dance of humans and machines, much like the modern complex choreography of planes crisscrossing the skies. And to make this technology work, we need fail-safe methods of ensuring that robots can’t harm their coworkers.
Recently, scientists have created new, dynamic ways of marking “personal space” for people and robots, and this enables close physical collaboration in manufacturing without endangering workers. In place of a static demarcation of robot and human space, the industrial environment is outfitted with new sensors that function effectively as virtual fences.
If a person moves close to a robot and crosses the virtual fence, the robot immediately stops moving. In more advanced environments, sensors are used to create dynamic safety zones, in which the distance between the person and the robot is actively monitored. As the person nears the robot, the robot slows, giving the person time to react before the robot stops completely.
Just as aircraft have different rules for separation in the air, industrial robots must conform to what are known as “speed and separation monitoring” standards, maintaining specified distances from people based on their speed. The faster the robot is moving, the farther away it must stay from people, and as a person nears the robot, it must slow and stop. One of the first systems of this kind was deployed in a BMW plant in Munich in 2017. A human associate worked underneath a towering orange industrial robot, two to three times his height, as they safely negotiated shared factory floor space to build cars.
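The logic of speed and separation monitoring can be sketched in a few lines. All of the numbers below are invented for illustration, and this is not the formula from the actual safety standards: the idea is simply that the protective distance must cover the ground both parties travel while the robot brakes, so the robot’s permitted speed shrinks as a person approaches and reaches zero at close range.

```python
# Illustrative sketch of speed and separation monitoring. Braking
# time, walking speed, and margin are assumed values, not the
# formula from any real safety standard.

ROBOT_STOP_TIME_S = 0.5      # assumed time for the robot to brake to a stop
HUMAN_SPEED_M_S = 1.6        # assumed human walking speed
BUFFER_M = 0.3               # fixed safety margin

def protective_distance(robot_speed):
    # ground covered by both parties while the robot brakes, plus margin
    return (robot_speed + HUMAN_SPEED_M_S) * ROBOT_STOP_TIME_S + BUFFER_M

def commanded_speed(distance_to_person, max_speed=2.0):
    """Fastest speed whose protective distance still fits the gap."""
    s = (distance_to_person - BUFFER_M) / ROBOT_STOP_TIME_S - HUMAN_SPEED_M_S
    return max(0.0, min(s, max_speed))

for d in (3.0, 1.5, 0.8):
    print(f"person at {d} m -> robot speed {commanded_speed(d):.1f} m/s")
# person at 3.0 m -> full speed; at 1.5 m -> slowed; at 0.8 m -> stopped
```

The slowdown is what gives the person time to react before the robot stops completely, rather than the robot halting abruptly only at the last moment.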
These simple switches, from physical cages to virtual fences and from static demarcation of safe space to dynamic adjustment of safe zones, make it easier for humans and robots to collaborate on manufacturing tasks, completing them more efficiently or to a higher standard than either human or robot could achieve working alone.
The “rules of the road” for working with robots don’t have to be static. They can adapt over time as robots become more capable and we become accustomed to them. As robots evolve, we can release them from their fixed “lanes.” Through more dynamic negotiation of shared resources, we can take some big leaps toward integrating robots into human environments.
Adapted from What to Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration, by Laura Major and Julie Shah. Copyright © 2020. Available from Basic Books, an imprint of Hachette Book Group.