Searle asks us to imagine a program that can pass a Chinese Turing test–and is accordingly fluent in Chinese. Now, someone who knows English but no Chinese, such as Searle himself, is shut up in a room. He takes the Chinese-understanding software with him; he can execute it by hand, if he likes.
Imagine “conversing” with this room by sliding questions under the door; the room returns written answers. It seems equally fluent in English and Chinese. But actually, there is no understanding of Chinese inside the room. Searle handles English questions by relying on his knowledge of English, but to deal with Chinese, he executes an elaborate set of simple instructions mechanically. We conclude that behaving as if you understand Chinese doesn’t mean you actually do.
But we don’t need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?
Well, what does a computer do? It executes “machine instructions”–low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), “branches” (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
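To make the flavor of these primitives concrete, here is a sketch in Python of the four kinds of operation just listed, acting on a small memory. The instruction names in the comments are invented for illustration; no real instruction set is being quoted.

```python
# A toy illustration of the primitive operations described above:
# arithmetic, comparison, branch, data movement.
memory = [7, 5, 0, 0]          # four numbered storage cells

# arithmetic: add two numbers
memory[2] = memory[0] + memory[1]     # ADD cell0, cell1 -> cell2

# comparison: which number is larger?
larger_is_first = memory[0] > memory[1]

# branch: if an addition yields zero, continue at another instruction;
# here the "jump" is just an if-statement
if memory[2] == 0:
    pass  # a real machine would continue at, say, instruction 200

# data movement: transfer a number from one place to another in memory
memory[3] = memory[2]                 # MOVE cell2 -> cell3

print(memory)  # -> [7, 5, 12, 12]
```

Everything a computer accomplishes, however sophisticated, is built by stringing together operations no richer than these four.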
So what is it like to be a computer running a complex AI program? Exactly like being a computer running any other kind of program.
Computers don’t know or care what instructions they are executing. They deal with outward forms, not meanings. Switching applications changes the output, but those changes have meaning only to humans. Consciousness, however, doesn’t depend on how anyone else interprets your actions; it depends on what you yourself are aware of. And the computer is merely a machine doing what it’s supposed to do–like a clock ticking, an electric motor spinning, an oven baking. The oven doesn’t care what it’s baking, or the computer what it’s computing.
The computer’s routine never varies: grab an instruction from memory and execute it; repeat until something makes you stop.
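That unvarying routine can itself be sketched in a few lines of Python. The four-instruction machine below is invented for illustration; what matters is the loop, which does nothing but grab the next instruction and execute it until one of them says stop.

```python
# A minimal sketch of the fetch-execute routine: grab an instruction,
# execute it, repeat until something makes you stop.
# The instruction set here is invented for illustration.
program = [
    ("LOAD", 3),      # put the number 3 in the accumulator
    ("ADD", 4),       # add 4 to it
    ("STORE", 0),     # move the result into memory cell 0
    ("HALT", None),   # the thing that makes the machine stop
]
memory = [0]
acc = 0
pc = 0                # program counter: which instruction to grab next

while True:
    op, arg = program[pc]     # grab an instruction from memory
    pc += 1
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break                 # repeat until something makes you stop

print(memory[0])  # -> 7
```

Swap in a different `program` list and the loop itself never changes; only the sequence of instructions it mechanically works through.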
Of course, we can’t know literally what it’s like to be a computer executing a long sequence of instructions. But we know what it’s like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it’s like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That’s what it’s like.
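The card-sorting analogy maps directly onto a simple sorting procedure. The Python sketch below (plain insertion sort, with arbitrary card values) labels each step with the primitive from the analogy it corresponds to.

```python
# Sorting a "deck" using only the two primitives from the analogy:
# comparisons (which card comes first?) and data movement
# (slip one card in front of another).
deck = [9, 2, 7, 4]

for i in range(1, len(deck)):
    card = deck[i]                       # pick up the next card
    j = i - 1
    while j >= 0 and deck[j] > card:     # comparison: which card comes first?
        deck[j + 1] = deck[j]            # data movement: shift a card over
        j -= 1
    deck[j + 1] = card                   # slip the card into place

print(deck)  # -> [2, 4, 7, 9]
```

Nothing in the procedure refers to what the cards mean; it works the same whether the symbols are playing cards, Chinese characters, or the internal states of an AI program.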
If you sort cards long enough and fast enough, will a brand-new conscious mind (somehow) be created? This is, in effect, what cognitivists believe. They say that when a computer executes the right combination of primitive instructions in the right way, a new conscious mind will emerge. So when a person executes the right combination of primitive instructions in the right way, a new conscious mind should (also) emerge; there’s no operation a computer can do that a person can’t.