True, there’s no doubt that investment in research and development has a different profile in the two cases; Kasparov has methods of extracting good design principles from past games, so that he can recognize, and decide to ignore, huge portions of the branching tree of possible game continuations that Deep Blue had to canvass seriatim. Kasparov’s reliance on this “insight” meant that the shape of his search trees (all the nodes explicitly evaluated) no doubt differed dramatically from the shape of Deep Blue’s, but this did not constitute an entirely different means of choosing a move. Whenever Deep Blue’s exhaustive searches closed off a type of avenue that it had some means of recognizing, it could reuse that research wherever appropriate, just as Kasparov could. Much of this analytical work had been done for Deep Blue by its designers, but Kasparov had likewise benefited from hundreds of thousands of person-years of chess exploration transmitted to him by players, coaches, and books.
It is interesting in this regard to contemplate the suggestion made by Bobby Fischer, who has proposed to restore the game of chess to its intended rational purity by requiring that the pieces on the back row be randomly placed at the start of each game (randomly, but in mirror image for black and white, with one bishop on a light square and one on a dark square, and the king between the rooks). Fischer Random Chess would render the mountain of memorized openings almost entirely obsolete, for humans and machines alike, since the familiar starting position, and with it the existing body of opening theory, would arise in only about one game in 960, far less than 1 percent of the time. The chess player would be thrown back onto fundamental principles; one would have to do more of the hard design work in real time. It is far from clear whether this change in rules would benefit human beings or computers more. It depends on which type of chess player is relying most heavily on what is, in effect, rote memory.
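Fischer’s constraints are concrete enough to state as a small procedure. The sketch below is purely illustrative (the function name and the rejection-sampling approach are my own, not any official Chess960 specification): it shuffles the eight back-row pieces until the two bishops sit on opposite-colored squares and the king sits between the rooks, with Black’s row understood as the mirror image of White’s.

```python
import random

# The eight back-row pieces: two rooks, two knights, two bishops, queen, king.
PIECES = ["R", "R", "N", "N", "B", "B", "Q", "K"]

def random_back_row():
    """Return one back-row arrangement satisfying Fischer's constraints:
    bishops on opposite-colored squares, king between the rooks.
    Black mirrors White, so a single row describes both sides."""
    while True:
        row = PIECES[:]
        random.shuffle(row)
        bishops = [i for i, p in enumerate(row) if p == "B"]
        rooks = [i for i, p in enumerate(row) if p == "R"]
        king = row.index("K")
        # Squares alternate in color, so opposite index parity means opposite colors.
        if (bishops[0] % 2) != (bishops[1] % 2) and rooks[0] < king < rooks[1]:
            return row

# Each call yields one of the 960 legal arrangements; the classical setup
# R N B Q K B N R is among them.
print(random_back_row())
```

There are exactly 960 arrangements that satisfy these constraints, which is where the variant’s other common name, Chess960, comes from.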
The fact is that the search space for chess is too big for even Deep Blue to explore exhaustively in real time, so like Kasparov, it prunes its search trees by taking calculated risks, and like Kasparov, it often gets these risks precalculated. Both the man and the computer presumably do massive amounts of “brute force” computation on their very different architectures. After all, what do neurons know about chess? Any work they do must use brute force of one sort or another.
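To make “pruning a search tree” concrete, here is the textbook alpha-beta cutoff in miniature. It illustrates only the general idea of skipping branches that cannot change the final choice; it is not a description of Deep Blue’s actual, far more elaborate search, and the evaluate and successors callables are placeholders I am assuming rather than real components.

```python
def alphabeta(position, depth, alpha, beta, maximizing, evaluate, successors):
    """Depth-limited minimax with alpha-beta pruning.

    `evaluate(position)` scores a position; `successors(position, maximizing)`
    yields the positions reachable in one move. Both are assumed placeholders.
    The point is the cutoff: whole subtrees are abandoned as soon as they
    provably cannot affect the move ultimately chosen at the root.
    """
    moves = list(successors(position, maximizing))
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        best = float("-inf")
        for child in moves:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       evaluate, successors))
            alpha = max(alpha, best)
            if alpha >= beta:   # cutoff: the remaining siblings are pruned
                break
        return best
    else:
        best = float("inf")
        for child in moves:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       evaluate, successors))
            beta = min(beta, best)
            if alpha >= beta:   # cutoff
                break
        return best
```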
It may seem that I am begging the question by describing the work done by Kasparov’s brain in this way, but the work has to be done somehow, and no way of getting it done other than this computational approach has ever been articulated. It won’t do to say that Kasparov uses “insight” or “intuition,” since that just means that Kasparov himself has no understanding of how the good results come to him. So, since nobody knows how Kasparov’s brain does it (least of all Kasparov himself), there is not yet any evidence at all that Kasparov’s means are so very unlike the means exploited by Deep Blue.