
What It Will Take for Computers to Be Conscious

The world’s best-known consciousness researcher says machines could one day become self-aware.

Is a worm conscious? How about a bumblebee? Does a computer that can play chess “feel” anything?

Christof Koch

To Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, the answer to these questions may lie in the fabric of the universe itself. Consciousness, he believes, is an intrinsic property of matter, just like mass or energy. Organize matter in just the right way, as in the mammalian brain, and voilà, you can feel.


Koch, now 57, has spent nearly a quarter of a century trying to explain why, say, the sun feels warm on your face. But after writing three books on consciousness, Koch says researchers are still far from knowing why it occurs, or even agreeing on what it is. It’s a difficult problem (see “Cracking the Brain’s Codes”). That is one reason that Koch left his position at Caltech in 2011 to become part of a $500 million project launched by the billionaire Paul Allen, Microsoft’s cofounder.


The Allen Institute’s goal is to build a detailed atlas of every neuron and synapse in the mammalian brain. That would give neuroscience a firehose of data comparable to what the Human Genome Project provided for genetics.

But Koch hasn’t given up his search for a grand theory that could explain it all. In fact, he thinks consciousness could be explained by something called “integrated information theory,” which asserts that consciousness is a product of structures, like the brain, that can both store a large amount of information and have a critical density of interconnections between their parts.
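
As a rough illustration of what “integration” means here, the Python sketch below is a deliberately simplified stand-in (it is not Tononi’s actual Φ calculation): for a tiny Boolean network, it measures how many bits the current state of one half of the system carries about the other half’s next state. A network split into disconnected modules scores zero; a densely interconnected one scores above zero.

```python
# Toy proxy for "integration" (not Tononi's phi): how much does one half of a
# small Boolean network constrain the other half's next state?
import itertools
import math

def step(state, weights):
    """Deterministic update: unit i fires if its summed weighted input is positive."""
    n = len(state)
    return tuple(
        1 if sum(weights[j][i] * state[j] for j in range(n)) > 0 else 0
        for i in range(n)
    )

def cross_half_information(weights, n):
    """Mutual information (bits) between half A's current state and half B's
    next state, assuming a uniform distribution over current states."""
    half = n // 2
    joint = {}
    states = list(itertools.product([0, 1], repeat=n))
    for s in states:
        key = (s[:half], step(s, weights)[half:])
        joint[key] = joint.get(key, 0) + 1 / len(states)
    pa = {}
    pb = {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items())

n = 4
# Two disconnected pairs (0<->1 and 2<->3): nothing crosses the partition.
modular = [[0] * n for _ in range(n)]
modular[0][1] = modular[1][0] = 1
modular[2][3] = modular[3][2] = 1
# All-to-all coupling (no self-connections): each half constrains the other.
integrated = [[0 if i == j else 1 for j in range(n)] for i in range(n)]

print("modular:   ", cross_half_information(modular, n))     # 0.0 bits
print("integrated:", cross_half_information(integrated, n))  # ~0.49 bits
```

The real theory is far more demanding: Φ is defined over a system’s full cause-effect repertoire and minimized across every possible way of partitioning the system. But the intuition is the same: a candidate for consciousness must carry information as a whole that is not reducible to its independently operating parts.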

To Koch, the theory provides a means to assess degrees of consciousness in people with brain damage, in species across the animal kingdom, and even, he says, among machines. We asked Koch about computer consciousness last week during MIT Technology Review’s EmTech conference.

Will discovering the biological basis of consciousness be dehumanizing in some way? What if it’s all just an illusion?

I find this view of some people that consciousness is an illusion to be ridiculous. If it’s an illusion, then it’s the most powerful illusion we have. I mean, the most famous deduction in Western philosophy is what? “I think, therefore I am.” The fact that you have conscious experience is the one undeniable certainty you have.

If scientists discover the basis of consciousness, what kinds of technologies could result from that?

We could have a test to say who has consciousness and who doesn’t. We have very emotional debates in this country about abortion. I would like to have some objective way to test at what point a fetus actually begins to have conscious sensation, or whether a patient [in a coma] is conscious or not. Often, you just don’t know. These are questions people have asked throughout history, but once we have a widely accepted theory, we could answer them. Also, if I wanted to build a machine that would be conscious, the theory would give me a blueprint.


So you think a computer can be conscious?

I gave a lecture [last week] at MIT about integrated information theory, developed by Giulio Tononi at the University of Wisconsin. This is a theory that makes a very clear prediction: it says that consciousness is a property of complex systems that have a particular “cause-effect” repertoire, a particular way of interacting with the world, as the brain does or, in principle, as a computer could. If you were to build a computer that has the same circuitry as the brain, this computer would also have consciousness associated with it. It would feel like something to be this computer. However, the same is not true for digital simulations.

If I build a perfect software model of the brain, it would never be conscious, but a specially designed machine that mimics the brain could be?

Correct. This theory clearly says that a digital simulation would not be conscious, which is strikingly different from the dominant functionalist belief of 99 percent of people at MIT and of philosophers like Daniel Dennett. They all say that once you simulate everything, nothing else is required, and it’s going to be conscious.

I think consciousness, like mass, is a fundamental property of the universe. The analogy, and it’s a very good one, is that you can make pretty good weather predictions these days. You can predict the inside of a storm. But it’s never wet inside the computer. You can simulate a black hole in a computer, but space-time will not be bent. Simulating something is not the real thing.

It’s the same thing with consciousness. In 100 years, you might be able to simulate consciousness on a computer. But it won’t experience anything. Nada. It will be black inside. It will have no experience whatsoever, even though it may have our intelligence and our ability to speak.

I am not saying consciousness is a magic soul. It is something physical. Consciousness always supervenes on the physical. But it takes a particular type of hardware to instantiate it. A computer made up of transistors, moving charge on and off a gate, with each gate connected to a small number of other gates, is just a very different cause-and-effect structure from what we have in the brain, where one neuron receives input from 10,000 neurons and projects to 10,000 others. But if you were to build the computer in the appropriate way, like a neuromorphic computer [see “Thinking in Silicon”], it could be conscious.

If I were to put you in a room with a computer from the future, would you be able to determine if it’s conscious?


I couldn’t from the outside. I would have to look at its hardware.

What about the Turing test?

The question Turing asked is “Can machines think?” But ultimately it’s an operational test for intelligence, not for consciousness. If you have a clever conversation with some guy in another room and after half an hour you can’t decide if it is a computer or a human, well, then you say it’s as intelligent as a human. But the Turing test would not tell me if the machine experiences anything. I could ask “Are you conscious?” and the machine could say “Yes, I am fully conscious. And why are you claiming I am not? I am insulted.” But I couldn’t really know. I’d have to say, “Sorry, I have to take you apart and understand how you are made and how you actually generate these different physical states.”

Isn’t there some trick question you could ask, that only a conscious being could answer?

A very good question. In humans we have practical tests for consciousness. If you have a bad accident and go to the ER, they will ask you: Can you move your eyes? Can you move your limbs? Can you talk? If you can talk, do you know what year it is? Do you know who the president is?

But how do I really know you are conscious? This is the problem of solipsism. In the last analysis, I do not know. But I know your brain is very similar to mine. I have put a lot of people into scanners, and I know they all have a brain, and their brains behave similarly to mine. So there is a perfectly reasonable inference that you too are conscious.

But the more these systems differ from me, the more difficult it is to make that inference. For instance, take a bee. Does it feel like something to be a bee, flying in the golden rays of the sun and drinking nectar? I find it very difficult to know whether a bee is conscious or not. And a computer is even more radically different. There is no behavior I can judge it by. I would have to look at its underlying hardware.

Do you think we will ever build conscious machines?


I’m not sure why we would. But there is no question in my mind that we will build smart machines that can pass the Turing test well before we understand the true biological basis of human intelligence. And I think there are dangers associated with that which most people, being blithe optimists, completely ignore.

What dangers?

Don’t you watch science fiction movies? “Runaway AI,” of course. Think about the financial markets: all those trading machines, flash crashes. People are going to abuse computer intelligence, blindly maximizing for some goal. It’s going to lead to more and more concentration of power among fewer and fewer people. We see this already; it’s going to lead to massive unemployment. And maybe 30 or 40 years on, I think there is really an existential danger to the species, at the level of nuclear weapons or a meteorite strike.

All without the machine being conscious? In the movies, the moment the AI goes nuts is the same moment that it gains conscious self-awareness.

That’s because people want to make an engaging story. If the enemy doesn’t feel anything, if there isn’t anything there, it doesn’t make a good opponent.
