Business Impact

From the factory floor to the OR, robots can make great teammates

Julie Shah is figuring out the best ways for us to interact with our future robotic colleagues.

Mar 1, 2018
Julie Shah is all about robotic augmentation.
Christopher Churchill


Robot coworkers and AI assistants are coming to an office near you.

And instead of worrying about a robotic takeover, Julie Shah is embracing it. An associate professor at MIT and one of our 2014 Innovators Under 35, Shah works on ways to make humans and machines into safe, efficient teammates. Her work has taken her to factory floors and bustling hospitals, where she tries to figure out how automation can make humans more productive. We sat down to talk to her about what we can expect to encounter when we begin working alongside robots—something many of us are already doing.

In factory settings, Shah has worked on introducing mobile robots to moving assembly lines.
courtesy of BMW

This article is part of a series paired with our newsletter Clocking In, which covers the impact of technology on the future of work.

Erin: What do you think is the most common misconception about robots in the workplace?

Julie: People often think artificial intelligence is one very general and powerful capability moving its way through all these different jobs. But AI does not work like that today. Currently, each AI system needs to be designed to perform a very specific task, and that takes a lot of engineering. The set of tasks is expanding, but we don’t have a “general AI” that will take over large swaths of human work. As AI becomes more capable, it will be able to do many small tasks across many different fields.

How does the potential to implement robots differ in a factory and a place like, say, a hospital?

When you talk about robots going into more service environments, hospitals, and office buildings, you have a much less structured environment. Robots need to learn context: personal preference, when things are busy, what time of week it is. It’s cumbersome to try to encode all of that.

We have been working on developing techniques to watch experts perform work. We watch nurses decide which patients to assign to each room. By observing those human experts, robots can learn to make similar decisions.

Hospitals can turn to this robot for help with things such as assigning nurses, procedures, and rooms to patients.

Have you noticed people being more accepting of automation in different industries?

Health care doesn’t have a long history with robots, so there is less built-up resistance to them. In manufacturing, there has been a cultural sensitivity to robots taking jobs, and you generally encounter a little more skepticism. There is a higher burden of proof that the robots will enhance human work rather than displace workers.

In the hospital, we studied nurses who performed the nurse manager role. They control parts of the operating room schedule: which rooms patients are assigned to, and which nurses get assigned to which patients. They are doing a job that’s mathematically more difficult than an air traffic controller’s, but they don’t have any of the same decision tools to help them. The nurses had a sense of the unique value they were bringing to the job. They knew their job was hard, and even though they were the best at it, they felt there was room for improvement.

How do you think the conversations around AI and work need to change?

I think one thing that is sometimes lacking from the discussion is that AI is not a technology out of our control. We are the designers of the AI. How we frame the problem changes what AI can produce.

This interview has been edited for clarity and length.