
Turing’s Enduring Importance

The path computing has taken wasn’t inevitable. Even today’s machines rely on a seminal insight from the scientist who cracked Nazi Germany’s codes.

When Alan Turing was born 100 years ago, on June 23, 1912, a computer was not a thing—it was a person. Computers, most of whom were women, were hired to perform repetitive calculations for hours on end. The practice dated back to the 1750s, when Alexis-Claude Clairaut recruited two fellow astronomers to help him plot the orbit of Halley’s comet. Clairaut’s approach was to slice time into segments and, using Newton’s laws, calculate the changes to the comet’s position as it passed Jupiter and Saturn. The team worked for five months, repeating the process again and again as they slowly plotted the course of the celestial bodies.

Today we call this process dynamic simulation; Clairaut’s contemporaries called it an abomination. They desired a science of fundamental laws and beautiful equations, not tables and tables of numbers. Still, his team made a close prediction of the perihelion of Halley’s comet. Over the following century and a half, computational methods came to dominate astronomy and engineering.

By the time Turing entered King’s College in 1931, human computers had been employed for a wide variety of purposes—and often they were assisted by calculating machines. Punch cards were used to control looms and tabulate the results of the American census. Telephone calls were switched using numbers dialed on a ring and interpreted by a series of 10-step relays. Cash registers were ubiquitous. A “millionaire” was not just a very rich person—it was also a mechanical calculator that could multiply and divide with astonishing speed.

All these machines were fundamentally limited. They weren’t just slower, less reliable, and dramatically poorer in memory than today’s computers. Crucially, the calculating and switching machines of the 1930s—and those that would be introduced for many years to come—were each built for a specific purpose. Some of the machines could perform manipulations with math, some could even follow a changeable sequence of instructions, but each machine had a finite repertoire of useful operations. The machines were not general-purpose. They were not programmable.

Things Reviewed

  • Turing’s Cathedral: The Origins of the Digital Universe

    George Dyson
    Pantheon Books, 2012

  • When Computers Were Human

    David Alan Grier
    Princeton University Press, 2005

  • Alan Turing: The Enigma

    Andrew Hodges
    Simon & Schuster, 1983

Meanwhile, mathematics was in trouble.

In the early 1920s the great German mathematician David Hilbert had proposed formalizing all of mathematics in terms of a small number of axioms and a consistent set of rules for deriving proofs. Hilbert envisioned a technique that could be used to validate arbitrary mathematical statements—to take a statement such as “x + y = 3 and x – y = 3” and determine whether it was true or false. This technique wouldn’t rely on insight or inspiration on the part of the mathematician; it had to be repeatable, teachable, and straightforward enough to be followed by a computer (in Hilbert’s sense of the word). Such a statement-proving system would be powerful stuff indeed, for many aspects of the physical world can readily be described as a set of equations. If one were able to apply a repeatable procedure to find out whether a mathematical statement was true or false, then fundamental truths about physics, chemistry, biology—even human society—would be discoverable not through experiments in the lab but by mathematicians at a blackboard.
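
To make the idea of such an insight-free procedure concrete, here is a minimal Python sketch (the function and its interface are my own illustration, not anything from Hilbert or Turing) that mechanically decides a statement of the same shape as the one above, by rote elimination rather than mathematical insight. Hilbert hoped a procedure of this character could be found for every mathematical statement; Turing would show that it cannot.

```python
from fractions import Fraction

def decide_linear_pair(a1, b1, c1, a2, b2, c2):
    """Decide whether "a1*x + b1*y = c1 and a2*x + b2*y = c2" is satisfiable,
    by rote elimination (Cramer's rule), with no insight required."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        # Parallel or identical lines: satisfiable only if the equations agree.
        consistent = (a1 * c2 == a2 * c1) and (b1 * c2 == b2 * c1)
        return consistent, None
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return True, (x, y)

# "x + y = 3 and x - y = 3" is true, witnessed by x = 3, y = 0.
print(decide_linear_pair(1, 1, 3, 1, -1, 3))
# -> (True, (Fraction(3, 1), Fraction(0, 1)))
```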

But in 1931, an Austrian logician named Kurt Gödel presented his devastating incompleteness theorem. It showed that for any useful system of mathematics, it is possible to create statements that are true but cannot be proved. Then came Turing, who drove the final stake through Hilbert’s project—and in so doing, set the path for the future of computing.

As Turing showed, the issue is not just that some mathematical statements are unprovable; in fact, no method can be devised that can determine in all cases whether a given statement is provable or not. That is, any statement on the blackboard might be true, might be false, might be unprovable … and it is frequently impossible to determine which. Math was fundamentally limited—not by the human mind but by the nature of math itself.

The brilliant, astonishing thing was the way Turing went about his proof. He invented a logical formalism that described how a human computer, taught to follow a complex set of mathematical operations, would actually carry them out. Turing didn’t understand how human memory worked, so he modeled it as a long tape that could move back and forth and on which symbols could be written, erased, and read. He didn’t know how human learning worked, so he modeled it as a set of rules that the human would follow depending on the symbol currently before her and some kind of internal “state of mind.” Turing described the process in such exact detail that ultimately, a human computer wasn’t even needed to execute it—a machine could do it instead. Turing called this theoretical entity the “automatic machine” or a-machine; today we call it a Turing machine.
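
That description is simple enough to sketch in code. The simulator below is a minimal illustration in Python (the representation is my own, not Turing’s 1936 notation): the tape is a dictionary of cells, the “state of mind” is a string, and each rule says what to write, which way to move, and which state to enter next. The sample rule table adds 1 to a binary number, but any table can be plugged in.

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=10_000):
    """Run a Turing machine.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right). The machine halts when no rule
    applies to the current (state, symbol) pair, or at the step bound.
    """
    tape = dict(tape)                       # absent cells hold the blank "_"
    for _ in range(max_steps):
        symbol = tape.get(pos, "_")
        if (state, symbol) not in rules:    # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# Rule table that adds 1 to a binary number, least significant bit at position 0.
INCREMENT = {
    ("start", "1"): ("0", -1, "start"),   # 1 becomes 0, carry to the left
    ("start", "0"): ("1", -1, "done"),    # 0 becomes 1, carry absorbed
    ("start", "_"): ("1", -1, "done"),    # ran off the left edge: new digit
}

print(run_turing_machine(INCREMENT, {-2: "0", -1: "1", 0: "1"}))  # "100", i.e. 011 + 1
```

The machinery never changes; only the rule table and the tape do.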

In a 1936 paper, Turing proved that the a-machine could solve any computing problem capable of being described as a sequence of mathematical steps. What’s more, he showed that one a-machine could simulate another a-machine. What gave the a-machine this power was that its tape could store both data and instructions. In the words of science historian George Dyson, the tape held both “numbers that mean things” and “numbers that do things.”
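
Dyson’s distinction is visible in the sketch above: the rule table is itself just symbols, so it can be written down as data, stored alongside the input, and decoded again by a fixed machine. The encoding below is an ad hoc convention for this article’s toy simulator (a genuine universal machine does the decoding on the tape itself), but the principle is the same; it reuses run_turing_machine and INCREMENT from the previous sketch.

```python
def encode_rules(rules):
    """Serialize a rule table to a flat string: the program becomes plain data."""
    return ";".join(f"{s},{sym},{w},{m},{n}" for (s, sym), (w, m, n) in rules.items())

def decode_rules(text):
    """Recover an executable rule table from its stored representation."""
    rules = {}
    for entry in text.split(";"):
        s, sym, w, m, n = entry.split(",")
        rules[(s, sym)] = (w, int(m), n)
    return rules

stored = encode_rules(INCREMENT)   # the INCREMENT program, now a string of symbols
print(run_turing_machine(decode_rules(stored), {-2: "0", -1: "1", 0: "1"}))  # "100" again
```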

Turing’s work was transformative. It made clear to the designers of early electronic computers that calculating machines didn’t need a huge inventory of fancy instructions or operations—all they needed were a few registers that were always available (the “state of mind”) and a memory store that could hold both data and code. The designers could proceed in the mathematical certainty that the machines they were building would be capable of solving any problem a human could program.

These insights provided the mathematical formulation for today’s digital computers, though it was John von Neumann who took up Turing’s ideas and is credited with the machines’ design. Von Neumann’s design had a central core that fetched both instructions and data from memory, performed mathematical operations, stored the results, and then repeated. The machine could also query the contents of multiple locations in memory as necessary. What we now call the von Neumann architecture is at the heart of every microprocessor and mainframe on the planet. It is dramatically more efficient than the a-machine, but mathematically, it’s the same.
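
That central loop can also be sketched in a few lines. The instruction set below is invented purely for illustration (it is no real processor’s), but the essential von Neumann property is there: one memory holds both the instructions and the values they operate on, and the core simply fetches whatever the program counter points at, executes it, and repeats.

```python
def run(memory):
    """A toy von Neumann core: a single memory dict holds both code and data.

    Instructions are tuples; everything else in memory is data.
      ("LOAD", addr)   acc <- memory[addr]
      ("ADD", addr)    acc <- acc + memory[addr]
      ("STORE", addr)  memory[addr] <- acc
      ("JUMP", addr)   pc <- addr
      ("HALT",)        stop
    """
    acc, pc = 0, 0
    while True:
        op, *args = memory[pc]              # fetch and decode
        pc += 1
        if op == "LOAD":
            acc = memory[args[0]]
        elif op == "ADD":
            acc += memory[args[0]]
        elif op == "STORE":
            memory[args[0]] = acc
        elif op == "JUMP":
            pc = args[0]
        elif op == "HALT":
            return memory

# Code lives in cells 0-3 and data in cells 4-6 of the same memory.
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT",),
    4: 2, 5: 3, 6: 0,
}
print(run(memory)[6])   # 5: the sum of the values held in cells 4 and 5
```

Because code and data share one memory, a program can in principle read, copy, or overwrite instructions just as it would any other value, which is part of what the next paragraph is about.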

Incidentally, this essential feature of computers helps explain why cybersecurity is one of the most troubling problems of the modern age. For one thing, Turing showed that any general-purpose machine can simulate any other, which is what makes it possible for an attacker to take over a target computer and make it run a program of the attacker’s choosing. Also, because it’s not always possible to discern what can be proved, a Turing machine cannot—no matter how much memory, speed, or time it has—evaluate another Turing machine’s design and reliably determine whether or not the second machine, upon being given some input, will ever finish its computations. This makes perfect virus detection impossible: no program can examine a previously unseen piece of software and reliably determine whether it is malicious without actually running it. The software might be benign, or it might run for years before it wipes the user’s files; there is no way to know for sure short of running it.
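
The impossibility at the bottom of that claim can itself be sketched. Suppose we had a perfect analyzer, here called halts(f), that always answers correctly and in finite time whether calling f() would ever finish; the name and interface are hypothetical, chosen only for illustration. The few lines below construct a program that defeats any such analyzer, which is the heart of Turing’s argument for why it cannot exist, and why a perfect malware detector cannot exist either.

```python
def make_trouble(halts):
    """Given a claimed perfect halting analyzer, build a program it must misjudge."""
    def trouble():
        if halts(trouble):       # analyzer predicts that trouble() halts...
            while True:          # ...so loop forever and prove it wrong
                pass
        return "halted"          # analyzer predicts it loops forever, so halt instead
    return trouble

# Whatever verdict halts() reaches about trouble(), trouble() does the opposite,
# so no analyzer can be right about every program. For example, with a naive
# analyzer that always answers "it halts":
trouble = make_trouble(lambda f: True)
# Calling trouble() here would loop forever, contradicting that verdict.
```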

In 1938 Turing began working with the British government and ultimately helped design a series of machines to crack the codes used by the Germans in World War II. The best source for that story is Andrew Hodges’s biography Alan Turing: The Enigma. Unfortunately, some details about Turing’s wartime work were not declassified until 2000, 17 years after Hodges’s book (and nearly 50 years after Turing committed suicide). As a result, the full story of his contributions has not been well told.

Many histories of computing give the impression that it was a straightforward set of engineering decisions to use punch cards, then relays, then tubes, and finally transistors to build computing machines. But it wasn’t. General-purpose machines required Turing’s fundamental insight that data and code can be represented the same way. And keep in mind that all of today’s computers were developed with the help of slower computers, which in turn were designed with slower computers still. If Turing had not made his discovery when he did, the computer revolution might have been delayed by decades.

TR contributing editor Simson L. Garfinkel is an associate professor of Computer Science at the Naval Postgraduate School. His views do not represent the official policy of the United States government or the Department of Defense.
