In some ways, Hawkins and the Pilot are a typical Silicon Valley story: years of hardscrabble technical work followed by a sudden leap into the financial stratosphere. Indeed, Hawkins soon did what successful computer pioneers often do: he left Palm in 1998 to create another new company. The secretive enterprise, called Handspring, has said only that it plans next year to introduce new hardware products based on Pilot software.
In other ways, though, Hawkins’ story is different. Soon after graduating from Cornell’s engineering school in 1982, he landed at Grid Systems, one of the first companies to make laptop computers. But all the while he was falling under the spell of another, wholly different field: neuroscience. His fascination grew so intense that in 1985 he abruptly left Grid and enrolled at the University of California at Berkeley as a graduate student in the field. Two years later, he returned to Grid with equal abruptness, but carried with him some ideas from neuroscience that he thought could have a big impact in the computer world. Indeed, the PalmPilot, which recognizes patterns written by a pen or stylus, is a direct spinoff from Hawkins’ work in theoretical neuroscience. Grid’s corporate parent, Tandy, became one of the original investors in Palm, which is now owned by 3Com.
Charles C. Mann, a frequent contributor to TR, started his interview with Hawkins by asking why he quit graduate school.
HAWKINS: I hated academia. I just couldn’t take the culture. I would make appointments with professors and they wouldn’t show up, and wouldn’t even apologize. So I went back into business.
TR: What triggered your decision?
HAWKINS: I wrote a PhD thesis proposal to the chairman of the graduate group in neurobiology. He said, “This is great. But there’s nobody at Berkeley who is doing this work, and you have to work for a professor, so you can’t do it.” He recommended spending four years getting my doctorate in neurobiology, doing research in a related but different area. And then maybe as a postdoc, I could work on what I wanted to. But I had left my job to pursue specific ideas I had about intelligence and neurobiology, not to pursue someone else’s research.
TR: Why didn’t you study this when you were young, and it would have been OK to be a grad student?
HAWKINS: I grew up in a family of engineers. My father is an engineer, my brothers are engineers. I was a happy-go-lucky kid who just went with the flow. In my family, that meant becoming an engineer.
TR: It doesn’t sound like your heart was in computers. But you still went back to them after quitting Berkeley.
HAWKINS: I asked myself what I should do with my understanding of neurobiology and intelligence. I decided that I would go back to work and hopefully achieve some wealth and notoriety from my computer work. I would then use those resources to promote my ideas about neural function in a scientific and popular fashion. I created Palm Computing and Handspring primarily so that in the not-too-distant future I will be in a position to develop and promote my ideas about intelligence and how certain parts of the brain work.
TR: So you’re not at Handspring to be in business?
HAWKINS: I love handheld computers and I love building businesses, but those are not the main reasons I do what I do. I plan to use the money that I am making to fund research on the human brain.
TR: What do you want to add to the tremendous amount of neuroscience research that’s already being done?
HAWKINS: In reading about the brain, I found that what was conspicuously absent was any sort of overarching theory to explain it. I noticed brain research was paying little attention to certain things. For instance, look at the cerebral cortex, or neocortex. It’s essentially a big sheet of neurons several millimeters thick. Although there are areas dedicated to vision, speech, touch and motor output, it’s a remarkably uniform structure: the areas that deal with vision are almost identical to those dealing with hearing. This similarity implies that the same basic mechanism underlies all sensory processing. This is a remarkable finding, yet it has been generally ignored.
TR: Why is this discovery so important?
HAWKINS: Because it helps explain how the brain processes all the information it receives. The major inputs to the brain are the optic nerve, the spinal cord (touch, if you will), and the auditory nerve. However, there’s really only one thing coming into the brain: patterns of neural firings. Now think about what these neural patterns are really like. First, your eyes are moving all the time. While you’re looking at my face, your eyes are doing these little dance movements called saccades. Combine this with the fact that a large portion of the fibers coming in at the optic nerve represent a small central portion of the visual field, the fovea. With every eye movement, the neural pattern in the optic nerve changes. This means that vision is not just a problem of spatial pattern recognition, but of time-based patterns. The temporal nature of vision has been ignored by almost all theories dealing with vision. The key to understanding vision is to understand the importance of the time-varying patterns. By the way, hearing and touch work the same way.
TR: Hearing seems clearly related to time-based patterns. But touch?
HAWKINS: Sure. The role of the fovea is played by your fingertips and the role of the saccade is played by the movement of your fingers over an object. Feeling an object creates a time-varying pattern. As the neocortex suggests, a common mechanism underlies vision, touch and hearing.
TR: How does this fit in with your model of the brain?
HAWKINS: You have to consider it together with the dominant nature of feedback. People tend to view the brain as a sort of input-output box. The input comes in, it gets processed, and out pops the result and you do the right thing. Well, if you look at the interconnections in the brain, there are many more fibers feeding backward than feeding forward. There’s more information traveling toward the input areas than there is toward the output areas; the ratio can be as high as 10 to 1. This is again something that is well known, but generally ignored because people don’t know what to make of it.
TR: OK, what should we make of it?
HAWKINS: One of the biggest implications is that parts of the brain look like what are called autoassociative memories. This is a type of memory that was partially inspired by neural architectures. It means that you provide part of what you’re looking for and you get the rest of it back. Clearly, that’s something brains are good at; memory is aided to a huge extent by context. You’re given a clue to something (say a taste or smell or image) and then you follow this progression of autoassociative recall.
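The "provide part of a pattern, get the rest back" behavior Hawkins describes can be sketched with a classic Hopfield-style network. This is a textbook construction offered as an illustration, not code from Hawkins' own research; the pattern values are arbitrary.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: the weight matrix is a sum of outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Repeatedly update all units until the state settles on a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store one +1/-1 pattern, then recall it from a corrupted cue.
stored = np.array([[1, 1, -1, -1, 1, -1, 1, -1]])
W = train(stored)
cue = stored[0].copy()
cue[:2] = -cue[:2]               # flip two bits: a partial, noisy version
result = recall(W, cue)
print(np.array_equal(result, stored[0]))  # True: the full pattern comes back
```

Given only a fragment of a stored pattern, the dynamics converge back to the complete pattern, which is the context-driven recall Hawkins is pointing at.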
TR: And you see this as leading to a theoretical model of how the brain functions?
HAWKINS: Yes, but there are problems. People who have studied the mathematics of autoassociative memory structures have found that if you make big autoassociative memories, they can’t store enough data. That is, if I make the memory 10 times larger, I can’t put 10 times as many data items in it. I can put bigger data items in it, but I can’t put more data items in it. So people have struggled with autoassociative memories as a model for brain function, because they have too limited a capacity.
TR: So why do you want to go back to them?
HAWKINS: Because I had a different approach. The earlier studies had been trying to apply autoassociative memory only to spatial data. But if you apply autoassociative memories to time-based data, you might be able to overcome their limitations. Remember, when you have bigger and bigger autoassociative memories, you can’t store more items, but you can store bigger items. If I view those bigger items as time-based data constructs, then I may not know a tremendous number of things, but I know a tremendous number of temporally connected things.
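One standard way to make an associative network handle time-based data, in the spirit of what Hawkins describes, is to use asymmetric weights so each stored pattern retrieves the next pattern in a sequence rather than settling on itself. This is a generic textbook technique, not Hawkins' specific method; the patterns are invented, mutually orthogonal examples.

```python
import numpy as np

def train_sequence(patterns):
    """Asymmetric Hebbian rule: pair each pattern with the NEXT one,
    so recall steps forward through the sequence in time."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for a, b in zip(patterns[:-1], patterns[1:]):
        W += np.outer(b, a)
    return W

def step(W, s):
    """One recall step: present a pattern, get its successor."""
    return np.where(W @ s >= 0, 1, -1)

# Three orthogonal patterns forming a temporal sequence A -> B -> C.
A = np.array([ 1,  1, -1, -1])
B = np.array([ 1, -1,  1, -1])
C = np.array([-1,  1,  1, -1])
W = train_sequence(np.stack([A, B, C]))
after_A = step(W, A)          # presenting A retrieves B
after_B = step(W, after_A)    # presenting B retrieves C
```

The stored "item" here is the whole temporal chain A, B, C, which is one way to read Hawkins' point about trading many small items for fewer, bigger, temporally connected ones.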
TR: What does all this have to do with intelligence?
HAWKINS: It goes back to my view that the brain is not just an input-output box. I think that intelligence is an ability of the organism to make successful predictions about its input. Intelligence is an internal measure of sensory prediction, not an external measure of behavior. When you look at my face, your eyes don’t just go randomly around. They look at very specific things. Typically they will look from eye to eye to nose to mouth. What your brain is doing during this process is saying, essentially: I see a pattern here that might be a face, and this might be an eye. And if I see an eye here, there should be another eye over there. It’s expecting a certain neural firing pattern at that instant. If you were to look at a face, and see a nose where an eye should be, then you’d know immediately something was amiss.
TR: So we have fundamental assumptions about things that help us make sense of the world.
HAWKINS: Say I moved the doorknob on your front door up an inch. Now when you come home, you’d reach out for the doorknob, and it wouldn’t be in the right spot. You’d notice that immediately, a misprediction. What if I made the doorknob a little wider or narrower? What if I made it stickier or heavier? I can think of a thousand changes I could make to your door and you’d notice them all. Now, the approach to this in traditional artificial intelligence (AI) research is to create a door database or door schema: a compilation of all the door’s properties. Then the AI machine would test every one of those properties, one after another.
TR: And you’re saying this is not how real brains work?
HAWKINS: I can guarantee you that. Your brain has no door database. We have to have a mechanism that tests all these door attributes at once. Autoassociative memories naturally make predictions about all their inputs. They are a great candidate mechanism. In a nutshell, intelligence is the ability of a system to make these low-level predictions about its input patterns. The more complex patterns you can predict over a longer time, the more you understand your environment and the more intelligent you are.
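The doorknob example can be made concrete with a toy predictor: hold a predicted attribute pattern, compare the whole sensed pattern against it, and report any deviation as a misprediction. The attribute names and values below are invented for illustration, and the loop is a sequential simulation of a comparison the brain would do in parallel.

```python
# Predicted attributes of "your front door" (all values are hypothetical).
expected = {"knob_height_cm": 91.0, "knob_width_cm": 5.0,
            "weight_kg": 20.0, "color": "white"}

def mispredictions(sensed, predicted, tol=0.5):
    """Return every attribute whose sensed value deviates from the prediction."""
    diffs = []
    for key, pred in predicted.items():
        obs = sensed[key]
        if isinstance(pred, float):
            if abs(obs - pred) > tol:   # numeric attribute: tolerance check
                diffs.append(key)
        elif obs != pred:               # categorical attribute: exact match
            diffs.append(key)
    return diffs

# Move the doorknob up an inch (2.54 cm): the change is flagged immediately.
sensed = dict(expected, knob_height_cm=93.54)
print(mispredictions(sensed, expected))  # ['knob_height_cm']
```

Any of the "thousand changes" Hawkins mentions, to height, width, weight, or color, would surface the same way: as a mismatch between prediction and input, with no door database queried property by property.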
TR: How did these ideas lead to the PalmPilot?
HAWKINS: I was at Berkeley in the mid-1980s, which was just when neural networks were becoming fashionable again. A company called Nestor was trying to sell a neural-network pattern analyzer to do handwriting recognition, for $1 million. I thought, there have got to be easier, better ways of doing this. I took some of the math I was working on and designed a pattern classifier, for which I received a patent.
TR: What did you do with it?
HAWKINS: Just for fun, I built a hand-printed-character recognizer. Then I thought about building a computer that could use it. This started me down the path of building pen-based computers, first the GridPad and eventually the PalmPilot. The pattern recognizer in today’s Palm products is based on the same recognition engine I created 12 years ago. It was inspired by the work I was doing in autoassociative memory.
TR: So the PalmPilot was just a byproduct, not a goal.
HAWKINS: Yes. I figured I could be successful building little computers that used my recognizer. It would give me some time to think about how I would get other people interested in autoassociative memories. Originally, I thought I would build portable computers for four or five years, make a name for myself, and then work full time on neurobiology.
TR: It has been almost 15 years.
HAWKINS: Yes, but that’s still my intent. In the next couple of years I hope to start spending more time on autoassociative memories. If I get to my deathbed and I haven’t made a significant contribution to the theory of how the brain works, I’ll be disappointed.
TR: Meanwhile, might your ideas about brain function lead to other commercial possibilities?
HAWKINS: I wouldn’t be surprised. One way to advance a science very rapidly is to find a commercial application for it. There is nothing like commercial success to get more people working on a problem.
TR: What sort of products do you imagine?
HAWKINS: Building autoassociative memories will be a very large business; some day more silicon will be consumed building such devices than for any other purpose. The amount of storage in a human brain is extremely large. It is impractical to use current memory technology to build memories anywhere near this capacity. Fortunately, autoassociative memories are fundamentally different from the kinds of memories we put in computers. When you build memory chips, their capacity is limited by the physical size of the die. Since silicon will have a certain number of defects per square millimeter, if you start making the chips too big, you’ll get a lot of chips with defects. Eventually the yield of good devices becomes unacceptably low: you have to throw away too many chips, driving the cost up.
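The yield argument Hawkins sketches follows the standard Poisson yield model used in chip manufacturing: the fraction of defect-free dies falls exponentially with die area. The defect density below is an illustrative assumption, not a figure from the interview.

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Probability that a die of the given area contains zero defects,
    under the Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

d = 0.005  # assumed defect density, in defects per square millimeter
small = poisson_yield(100, d)    # a 100 mm^2 die
large = poisson_yield(1000, d)   # a die 10x larger
print(f"small die yield: {small:.2f}, large die yield: {large:.2f}")
```

With these assumed numbers, making the die ten times larger collapses the yield from roughly 61 percent to under 1 percent, which is why conventional chips cannot simply be scaled up, and why fault tolerance changes the economics.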
TR: But this won’t be true with autoassociative memories?
HAWKINS: Right, they are naturally fault-tolerant. If some percentage of the cells don’t work properly, it doesn’t really matter. Autoassociative memory chips will be very large and relatively cheap.
TR: What would they be used for?
HAWKINS: This is a little like asking in 1948 what the transistor would be used for. I believe autoassociative memories, like transistors, will be an enabling technology. The early applications will be modest. Ask what problems can benefit from a system that understands its environment, can predict what ought to be happening next and can recognize unexpected and undesirable events. Any human job that requires lots of attention to patterns and few motor skills is a candidate. Security surveillance could be an interesting market to start with.
TR: There are a lot of applications like that.
HAWKINS: How you get there in five steps, I really don’t know. What drives me is my absolute certainty that this is the right approach to how brains are built.