Does a Tele-Robot Operator Need a Visa and W-2?

Experts gathered this week at Stanford’s Law School to discuss the robot revolution.
April 11, 2013

As robotics software and hardware are commercialized, companies will face some interesting new conundrums, which may give them pause before adopting technologies ranging from workplace telepresence robots and robotic surgical tools to driverless cars and commercial drones.

But make no mistake, it will be the lawyers just as often as the technologists guiding purchasing decisions, and a hundred legal experts gathered at a conference at Stanford’s Law School yesterday to mull over wide-ranging legal questions posed by the robots marching over the commercial horizon. 

Consider an experiment conducted at the Silicon Valley robot incubator Willow Garage. Employees there hated doing the dishes, so they hired Internet workers through Amazon’s Mechanical Turk system. How could an Internet worker do the dishes? First, a worker took some online training; if he passed successive levels of robot-driving tests, he was given the ability to operate a PR2, Willow Garage’s $285,000-and-up robot that is dexterous enough to (slowly) fold laundry, set a table, and yes, wash dishes.

It might be a clever solution for chore-averse robotics researchers, but the situation is a walking HR nightmare, and it has larger economic implications for service workers who can telecommute. As the Willow Garage experiment shows, remote workers won’t just participate in intellectual collaboration or online tasks. They could one day be doing actual labor, mediated through semi-autonomous robots. And so lawyers, ever practical, want to know: In what states will these employees pay their taxes? What happens if a contract worker sexually harasses an employee? And how does one make sure companies are protecting their trade secrets and employees’ privacy when telepresence machines enter the office?

Already, there are HR department issues being posed by more common telepresence robots (see “Beam Yourself to Work in a Remote Controlled Body”). These robots are meant to help remote workers have a physical presence in an office environment, but they can’t manipulate objects or do other physical tasks. Yet when the browser company Mozilla tested some telepresence robots with Willow Garage, legal teams required six months of negotiations to hash out who is to blame if, say, a human-guided robot falls down the stairs.

Some of these questions are not unique to robotics; others will likely be solved with good insurance policies. But the near-term challenge is that machines are starting to combine software-controlled autonomy and human involvement in ways that present entirely new situations. The first death caused by a driverless car will raise similar issues. Bugs are inevitable in complex software, but it’s different when this software could cause a hit-and-run.

Incidentally, Willow Garage decided to end the crowdsourced kitchen-worker experiment, says robot social scientist Leila Takayama (see “TR35: Leila Takayama”), who told the story at the “We Robot” conference at Stanford. Unsurprisingly, Willow Garage employees were creeped out by the idea of an anonymous Internet person hanging around their kitchen. They decided it was just time to start dealing with their own dishes.
