Artificial intelligence has been obsessed with several questions from the start: Can we build a mind out of software? If not, why not? If so, what kind of mind are we talking about? A conscious mind? Or an unconscious intelligence that seems to think but experiences nothing and has no inner mental life? These questions are central to our view of computers and how far they can go, of computation and its ultimate meaning, and of the mind and how it works.

They are deep questions with practical implications. AI researchers have long maintained that the mind provides good guidance as we approach subtle, tricky, or deep computing problems. Software today can cope with only a smattering of the information-processing problems that our minds handle routinely: when we recognize faces or pick elements out of large groups based on visual cues, use common sense, understand the nuances of natural language, or recognize what makes a musical cadence final or a joke funny or one movie better than another. AI offers to figure out how thought works and to make that knowledge available to software designers.

It even offers to deepen our understanding of the mind itself. Questions about software and the mind are central to cognitive science and philosophy. Few problems are more far-reaching or have more implications for our fundamental view of ourselves.

The current debate centers on what I’ll call a “simulated conscious mind” versus a “simulated unconscious intelligence.” We hope to learn whether computers make it possible to achieve one, both, or neither.

I believe it is hugely unlikely, though not impossible, that a conscious mind will ever be built out of software. Even if it could be, the result (I will argue) would be fairly useless in itself. But an unconscious simulated intelligence certainly could be built out of software, and it might be useful. Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the “cognitive continuum” that connects the seemingly unconnected puzzle pieces of thinking (for example, analytical thought, common sense, analogical thought, free association, creativity, and hallucination). The cognitive continuum explains how all of these reflect different values of one quantity or parameter that I will call “mental focus” or “concentration,” which changes over the course of a day and a lifetime.

Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies, which seem to underlie creativity.

My case for the near-impossibility of conscious software minds resembles what others have said. But these are minority views. Most AI researchers and philosophers believe that conscious software minds are just around the corner. To use the standard term, most are “cognitivists.” Only a few are “anticognitivists.” I am one. In fact, I believe that the cognitivists are even more wrong than their opponents usually say.

But my goal is not to suggest that AI is a failure. It has merely developed a temporary blind spot. My fellow anticognitivists have knocked down cognitivism but have done little to replace it with new ideas. They’ve shown us what we can’t achieve (conscious software intelligence) but not how we can create something less dramatic but nonetheless highly valuable: unconscious software intelligence. Once AI has refocused its efforts on the mechanisms (or algorithms) of thought, it is bound to move forward again.

Until then, AI is lost in the woods.
