
The Dark Side of the Technology Utopia

It’s time to figure out how to preserve the human element in industries that are being automated.
November 11, 2009

At Defrag 2009, a technology conference in Denver, I am in a room full of people who hope that technology will play a big role in helping the economy recover. As usual at technology conferences, people tend toward a combination of idealism and hubris. They are ready to believe that smart technological solutions exist for many of today’s ills, and they expect those solutions to raise the quality of life for most people.

It’s no surprise, then, that Andy Kessler, a frequent Wall Street Journal contributor, struck a nerve yesterday with a keynote titled “Be Soylent–Eat People.” Kessler’s talk certainly shared kinship with the usual technology idealist’s line–he expressed an absolute faith in the ability of technologists to solve problems and produce ever-increasing automation.

His celebration of technology, however, took on a dark note that had many up in arms. His basic premise was that all business boils down to a cold equation: output per worker-hour. Some workers are creators, and therefore productive. Everyone else’s jobs should be automated out of existence. It is a testament, perhaps, to the extremity of his vision that he proposed automation so sweeping that even technology enthusiasts were offended.

Kessler’s ideas were presented with all the subtlety and compassion of a sledgehammer. He classed teachers as “sloppers,” a category of jobs he characterized as “moving things from one side of the room to another.” He also claimed that required entrance exams for professions are “bogus,” and called doctors “sponges.”

These last provocative statements seem too rooted in ignorance to be taken seriously. It’s a poor view of education, for example, that sees learning as simply moving facts from some repository into students’ heads.

Analyst Stowe Boyd, in a later keynote, attacked Kessler’s views as a “remorseless Taylorist vision.” Productivity, in Boyd’s view, is not as easily quantifiable as Kessler seems to believe. Boyd pointed out that when most people receive a request from a friend, they stop the (productive) thing they’re doing and take a few moments to make an introduction or write a recommendation. “People will continue to trade personal productivity for connectedness,” he said, suggesting that connectedness could have its own payoff.

Everyone I’ve talked to today has made some reference to Kessler. When a negative reaction is this powerful and prevalent, it’s worth examining why.

Technologists often promise that they will automate the tasks that people find unpleasant, and Kessler seemed to suggest that vast swathes of society’s tasks should be considered as such. His vision is rooted in the automation that came to farming and factories.

Yet, today’s technology innovators don’t see themselves this way. The obsession with information and social software is billed as a way to stay connected with people, not as a way of automating them out of existence.

Kessler was disrespectful of many of the jobs he suggested could be automated. And Boyd was right that productivity is not so easy to measure or understand. However, Kessler made people uncomfortable partly because he pointed out and celebrated the dark side of the vision of technological utopia as it still exists today. Industries are being automated out of existence–just ask people in advertising or publishing.

At a press event I attended recently, Google CEO Eric Schmidt was challenged about the mixed effect the search engine has had on the newspaper industry. Schmidt responded by saying that technology companies such as Google have a responsibility to help protect what’s valuable in the information sources they depend on. He added, however, “We’ve not yet figured out how to exercise that responsibility.”

There’s not much time to answer that question. Kessler acknowledged the cold, uncomfortable equation by which machines replace people. If that vision offends the people creating those technologies, now is the time to think about how to avoid losing human value in the course of introducing new technologies. Otherwise, that human value gets relegated to boutique movements such as the organic food industry.
