
The Shaman’s Vision Stone

The Pattern on the Stone: The Simple Ideas that Make Computers Work
November 1, 1998

Given how little most of us understand about the insides of our computers, the etchings on the silicon chips at their cores can look as cryptic and occult as the tracery on a shaman’s vision stone. Danny Hillis, an alumnus of the MIT Artificial Intelligence Lab, founder of Thinking Machines, and now a researcher at Disney, wastes no time bemoaning this widespread ignorance about computers. Near the end of The Pattern on the Stone, in fact, he suggests that no one is smart enough to understand all the things computers can do. But Hillis nonetheless sets out to dispel the computer’s undeserved mystique with a series of nimble comparisons of his own.

There’s nothing special about silicon, Hillis wants the reader to know. The universal building blocks of computation (simple logical functions such as “and,” “or,” and “not”) can be implemented using rods and springs, water pipes and hydraulic valves, and many other physical systems. All decisions can be broken down into combinations of these simple functions, and computer programs are simply vast trees made up of such decisions. Hillis goes on to explain, plainly and concisely, how programming languages, algorithms and heuristics, memory and encryption, and other arcana are abstractions building upon each other and on the basic building blocks.
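The idea can be sketched in a few lines of Python (this is an illustration of the principle, not code from the book): starting from nothing but “and,” “or,” and “not,” you can compose more elaborate decisions, and eventually arithmetic itself.

```python
# The three primitive logical functions -- the universal building blocks.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

# Exclusive-or, built purely by composing the primitives above.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# A one-bit "half adder" -- the seed of binary arithmetic -- from the
# same blocks: the sum bit is XOR, the carry bit is AND.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)
```

In a real chip the same functions are realized by transistors, but, as Hillis notes, rods and springs or water valves would do just as well.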

With all the blocks in place, Hillis is able to turn to his real passions: parallel computing, neural networks, and the possibility of machine intelligence. Computers with hundreds or even thousands of processors are useful for certain large computational jobs, such as weather simulations, that can be decomposed into many small sections, he explains. In a few painless pages, he also clarifies how parallel processors acting as self-organizing networks of artificial neurons can “learn” any logical operation, through a trial-and-error method in which the neurons that get the right answer are rewarded with increased influence over their neighbors in the course of the next trial.

The brain must be a self-organizing, massively parallel computer, Hillis argues. But this is where the tower of building blocks topples. Human consciousness cannot necessarily be broken down into the same logical operations that underlie computer programming, Hillis cautions. And if intelligence ever arises in a computer, he predicts, it will probably be an “emergent” property of neural networks competing for survival in artificial-selection experiments, not something planned or understood by the machine’s designers. At some level, Hillis seems to be saying, thought may indeed be a kind of magic.
