
In a recent Wired magazine article, Bill Joy argued that the consequences of research on robotics, genetic engineering and nanotechnology may lead to “knowledge-enabled mass destruction…hugely amplified by the power of self-replication.” His medicine: “relinquishment…by limiting our pursuit of certain kinds of knowledge.” I don’t buy it.

What troubles me with this argument is the arrogant notion that human logic can anticipate the effects of intended or unintended acts, and the more arrogant notion that human reasoning can determine the course of the universe. Let me explain and offer some alternatives.

We are seldom able to assess where we are headed. In 1963, when we built time-shared computers, we did it to spread the cost of a $2 million processor among many users. In 1970, when DARPA pioneered the Arpanet, it did so to avoid buying expensive computers for its contractors, who were told to share their networked machines. Both efforts succeeded, not for these goals, but because they enabled people to share information. The Internet was launched to interconnect networks of computers; no one anticipated that its biggest application would be the Web. Radar was designed for war but ended up as a cornerstone of air transportation. Nuclear weapons research put nuclear medicine on the map. Thousands of innovations all share the same pattern: the early assessment is unrelated to the outcome.

So limited is our ability to assess consequences that it’s not even helped by hindsight: On balance, are cars a good or bad thing for society? How about nuclear power, or nuclear medicine? We are unable to judge whether something we invented more than 50 years ago is good or bad for us today. Yet Joy wants us to make these judgments prospectively, to determine which technologies we should forgo!

Developments that today seem fearful may turn into mirages. Take the spiritual machines of Ray Kurzweil that concern Bill Joy. I have a lot of respect for Ray, and I welcome his ideas, as I do Bill’s, however outlandish or controversial they may be. But we should draw a clear line between what is imagined and what is likely. To blur this line is tantamount to quackery. Just because chips and machines are getting faster doesn’t mean they’ll get smarter, let alone lead to self-replication. If you move your arms faster, you won’t get brighter. Despite fashionable hyperventilation about intelligent agents, today’s computer systems are not intelligent in the normal sense of the word. Nor do we see on the research horizon the critical technologies that would lead them there. Should we stop computer science and AI research in the belief that intelligent machines someday will reproduce themselves and surpass us? I say no. We should wait to find out whether the potential dangers are supported by more than our imagination.
