Robo-Sabotage Is Surprisingly Common

The beating of HitchBot reflects a broader pattern of robot sabotage in the workplace.
August 4, 2015

As you probably know by now, HitchBot—a device made of pool noodles, rubber gloves, a bucket, and the computer power needed to talk, smile, and tweet—was deliberately decapitated and dismembered this week, only 300 miles into its hitchhiking journey across the United States. HitchBot had successfully made similar journeys across the Netherlands, Germany, and Canada, relying on bemused strangers for transportation. The geek-o-sphere is up in arms, claiming that this violence reveals something special and awful about America, or at least Philadelphia.

HitchBot at Niagara Falls, in happier times.

I think perhaps there’s something else at work here. Beyond building robots to increase productivity and do dangerous, dehumanizing tasks, we have made the technology into a potent symbol of sweeping change in the labor market, increased inequality, and recently the displacement of workers (see “Who Will Own the Robots?”). If we replace the word “robot” with “machine,” this has happened in cycles extending well back through the Industrial Revolution. Holders of capital invest in machinery to increase production because they get a better return, and then many people, including journalists, academics, and workers, cry foul, pointing to the machinery as destroying jobs. Amidst the uproar, eventually there are a few reports of people angrily breaking the machines.

Two years ago, I did an observational study of semiautonomous mobile delivery robots at three different hospitals. I went in to study how the robots changed the way work got done, but I found that beyond increasing productivity through delivery work, the robots were kept around as a symbol of how progressive the hospitals were, and that when people who’d been doing similar delivery jobs at the hospitals quit, their positions weren’t filled.

Most entry-level workers did not like this one bit. Soon after implementation, managers at all my sites noticed that some of these workers sabotaged the robots. Some of this sabotage was overtly violent: kicking the robots, hitting them with a baseball bat, stabbing their “faces” with pens, shoving, and punching. But much of it was passive: hiding the robots in the basement, moving them outside their preplanned routes, obscuring their sensors, walking slowly in front of them, and, most of all, simply not using them. Workers and managers alike attributed this behavior to an ongoing, frustrated workplace dialogue about fair work for fair pay.

The irony is that this resistance is misdirected. If technology is responsible for job losses and increased inequality in organizations and society, we should be focused on networked computers and software. Robotics isn’t even a rounding error in that equation. But while information technology is practically invisible and inert, robots move with apparent intention in the spaces where we live and work. A growing body of research shows that we have strong visceral reactions to technology with this combination of characteristics. I saw the lighter side of this at hospitals, too: workers named these robots, dressed them, and talked to them. Kids waited all day just for the chance to walk with them around their cancer ward. One employee even baked a robot a loaf of zucchini bread for Thanksgiving.

We’ll probably never know what inspired the attack on HitchBot, but like looms in the Luddite era, robots will likely be sabotaged by angry people who see their livelihoods and opportunities fading while a wealthy few reap dividends. What’s more, sabotaging robots probably won’t change much, and might even distract us from issues and technologies that matter.

Matt Beane is a doctoral student in his final year at MIT’s Sloan School of Management. He studies the implications of robotics for collaborative work.
