The Shaman’s Vision Stone
Given how little most of us understand about the insides of our computers, the etchings on the silicon chips at their cores can look as cryptic and occult as the tracery on a shaman’s vision stone. Danny Hillis, an alumnus of the MIT Artificial Intelligence Lab, founder of Thinking Machines, and now a researcher at Disney, wastes no time bemoaning this widespread ignorance about computers. Near the end of The Pattern on the Stone, in fact, he suggests that no one is smart enough to understand all the things computers can do. But Hillis nonetheless sets out to dispel the computer’s undeserved mystique using a series of equally nimble comparisons.
There’s nothing special about silicon, Hillis wants the reader to know. The universal building blocks of computation, simple logical functions such as “and,” “or,” and “not,” can be implemented using rods and springs, water pipes and hydraulic valves, and many other physical systems. All decisions can be broken down into combinations of these simple functions, and computer programs are simply vast trees made up of such decisions. Hillis goes on to explain, plainly and concisely, how programming languages, algorithms and heuristics, memory and encryption, and other arcana are abstractions built upon one another and on those basic blocks.
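Hillis’s universality claim can be seen in a few lines of code. This is a minimal sketch (not from the book): exclusive-or, a function outside the basic set, assembled purely from “and,” “or,” and “not.”

```python
# Hypothetical illustration: composing new logic from the three basic functions.
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

def XOR(a, b):
    # "a or b, but not both" -- built only from the universal building blocks
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (False, True):
    for b in (False, True):
        print(a, b, XOR(a, b))
```

Any decision expressible as a truth table can be constructed the same way, which is why the physical medium carrying the functions does not matter.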
With all the blocks in place, Hillis is able to turn to his real passions: parallel computing, neural networks and the possibility of machine intelligence. Computers with hundreds or even thousands of processors are useful for certain large computational jobs such as weather simulations, which can be decomposed into many small sections, he explains. In a few painless pages, he also clarifies how parallel processors acting as self-organizing networks of artificial neurons can “learn” any logical operation, through a trial-and-error method in which the neurons that get the right answer are rewarded with increased influence over their neighbors in the course of the next trial.
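The trial-and-error scheme Hillis describes can be sketched as a single artificial neuron; this toy example (my own illustration, with assumed names and a made-up learning rate, not the book’s code) nudges each connection’s influence toward the right answer until the neuron has “learned” logical “or.”

```python
import random

random.seed(0)  # make the sketch reproducible

# Truth table for logical "or": inputs -> desired output
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Start with random connection strengths ("influence")
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

def neuron(x):
    # Fire (output 1) if the weighted inputs exceed the threshold
    total = w[0] * x[0] + w[1] * x[1] + bias
    return 1 if total > 0 else 0

for trial in range(100):
    for x, target in examples:
        error = target - neuron(x)
        # Trial and error: connections that would have produced the right
        # answer gain influence on the next trial; wrong ones lose it
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([neuron(x) for x, _ in examples])  # matches the "or" truth table
```

No one hand-designs the final weights; the correct behavior emerges from repeated reward and correction, which is the same intuition Hillis scales up to self-organizing networks of many neurons.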
The brain must be a self-organizing, massively parallel computer, Hillis argues. But this is where the tower of building blocks topples. Human consciousness cannot necessarily be broken down into the same logical operations that underlie computer programming, Hillis cautions. And if intelligence ever arises in a computer, he predicts, it will probably be an “emergent” property of neural networks competing for survival in artificial-selection experiments, not something planned or understood by the machine’s designers. At some level, Hillis seems to be saying, thought may indeed be a kind of magic.