
AI’s PR Problem

Had artificial intelligence been named something less spooky, we’d probably worry about it less.
March 3, 2017

HBO’s Westworld features a common plot device—synthetic hosts rising up against their callous human creators. But is it more than just a plot device? After all, smart people like Bill Gates and Stephen Hawking have warned that artificial intelligence may be on a dangerous path and could threaten the survival of the human race.

They’re not the only ones worried. The Committee on Legal Affairs of the European Parliament recently issued a report calling on the EU to require intelligent robots to be registered, in part so their ethical character can be assessed. The “Stop Killer Robots” movement, opposed to the use of so-called autonomous weapons in war, is influencing both United Nations and U.S. Defense Department policy.

Artificial intelligence, it seems, has a PR problem. While it’s true that today’s machines can credibly perform many tasks (playing chess, driving cars) that were once reserved for humans, that doesn’t mean that the machines are growing more intelligent and ambitious. It just means they’re doing what we built them to do.

The robots may be coming, but they are not coming for us—because there is no “they.” Machines are not people, and there’s no persuasive evidence that they are on a path toward sentience.


We’ve been replacing skilled and knowledgeable workers for centuries, but the machines don’t aspire to better jobs and higher pay. Jacquard looms replaced expert needleworkers in the 19th century, but these remarkable devices—programmed with punch cards for a myriad of fabric patterns—didn’t spell doom for dressmakers and tailors. Until the mid-20th century we relied on our best and brightest to do arithmetic—being a “calculator” used to be a highly respected profession. Now that comparably capable devices are given away as promotional trinkets at trade shows, the mathematically minded among us can focus on tasks that require broader skills, like statistical analysis. Soon, your car will be able to drive you to the office on command, but you don’t have to worry about it signing up with Uber to make a few extra bucks for gas while you’re in a staff meeting (unless you instruct it to).

AI makes use of some powerful technologies, but they don’t fit together as well as you might expect. Early researchers focused on ways to manipulate symbols according to rules. This was useful for tasks such as proving mathematical theorems, solving puzzles, or laying out integrated circuits. But several iconic AI problems—such as identifying objects in pictures and converting spoken words to written language—proved difficult to crack. More recent techniques, which go under the aspirational banner of machine learning, proved much better suited for these challenges. Machine-learning programs extract useful patterns out of large collections of data. They power recommendation systems on Amazon and Netflix, hone Google search results, describe videos on YouTube, recognize faces, trade stocks, steer cars, and solve a myriad of other problems where big data can be brought to bear. But neither approach is the Holy Grail of intelligence. Indeed, they coexist rather awkwardly under the label of artificial intelligence. The mere existence of two major approaches with different strengths calls into question whether either of them could serve as a basis for a universal theory of intelligence.
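To make the contrast concrete, here is a minimal sketch of the two paradigms applied to the same toy task, spam filtering. It is purely illustrative; the data and function names are made up, not drawn from any real system. The symbolic version encodes a rule a human wrote down, while the learning version extracts its pattern from a few labeled examples.

```python
# Illustrative sketch only: contrasting a hand-coded symbolic rule
# with a pattern extracted from data. All examples here are made up.
from collections import Counter

# Symbolic approach: a human writes the rule explicitly.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# Machine-learning approach: count word frequencies in a tiny
# labeled dataset and let the pattern emerge from the data.
examples = [
    ("claim your free money now", True),
    ("free money waiting for you", True),
    ("lunch at noon tomorrow", False),
    ("meeting notes attached", False),
]

spam_words, ham_words = Counter(), Counter()
for text, is_spam in examples:
    (spam_words if is_spam else ham_words).update(text.lower().split())

def is_spam_by_learning(message: str) -> bool:
    # Score a message by how often its words appeared in each class.
    words = message.lower().split()
    return sum(spam_words[w] for w in words) > sum(ham_words[w] for w in words)

print(is_spam_by_rule("FREE MONEY inside"))    # True: the rule fires
print(is_spam_by_learning("claim your money")) # True: the data decides
```

The division of labor is the point: in the first function a person supplies the intelligence, while in the second the labeled data does. Neither approach, scaled up, amounts to a machine that wants anything.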

For the most part, the AI achievements touted in the media aren’t evidence of great improvements in the field. The AI program from Google that won a Go contest last year was not a refined version of the one from IBM that beat the world’s chess champion in 1997; the car feature that beeps when you stray out of your lane works quite differently from the one that plans your route. Instead, the accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It’s easy to mistake the drumbeat of stories about machines besting us at tasks for evidence that these tools are growing ever smarter—but that’s not happening.

Public discourse about AI has become untethered from reality in part because the field doesn’t have a coherent theory. Without such a theory, people can’t gauge progress, and characterizing any advance becomes anyone’s guess. As a result, the people we hear from the most are those with the loudest voices rather than those with something substantive to say, and press reports about killer robots go largely unchallenged.

I’d suggest that one problem with AI is the name itself—coined more than 60 years ago to describe efforts to program computers to solve problems that required human intelligence or attention. Had artificial intelligence been named something less spooky, it might seem as prosaic as operations research or predictive analytics.

Perhaps a less provocative description would be something like “anthropic computing.” A broad moniker such as this could encompass efforts to design biologically inspired computer systems, machines that mimic the human form or abilities, and programs that interact with people in natural, familiar ways.

We should stop describing these modern marvels as proto-humans and instead talk about them as a new generation of flexible and powerful machines. We should be careful about how we deploy and use AI, but not because we are summoning some mythical demon that may turn against us. Rather, we should resist our predisposition to attribute human traits to our creations and accept these remarkable inventions for what they really are—potent tools that promise a more prosperous and comfortable future.

Jerry Kaplan teaches the social and economic impact of AI at Stanford University. His latest book is Artificial Intelligence: What Everyone Needs to Know, from Oxford University Press.
