You’ve likely heard stories about the birth of the PC: of Xerox PARC as the Mecca of computing; of its creation of the Alto, Ethernet, and the laser printer; of the Homebrew Computer Club, the MITS Altair, Bill Gates and the theft of his Micro-soft Basic; of Steve Jobs and Stephen Wozniak, the founding of Apple, and the Jobs visit to PARC that inspired the Macintosh.
But what you may not know about is the really early history. The stories of Doug Engelbart and John McCarthy, of the Augmentation Research Center, and of the early days of the Stanford University AI Lab (SAIL) are not well known. Yes, you may have heard that Engelbart invented the mouse, and that SAIL and Stanford led to companies like Sun and Cisco. But there are better stories, great and old ones from the early days of computing, about the events that led to personal computing as we know it.
In his wonderful new book, What the Dormouse Said…, John Markoff tells these stories. Markoff was born in Oakland, CA, and has been covering Silicon Valley for the New York Times for more than a decade. From a distinctly West Coast perspective, Dormouse chronicles the origins of the personal computer and its place in the Bay Area culture of the 1960s. Having lived, intensely, the later part of this story, I am fascinated by the great back stories of people I came to know and, often, work with. Many of these stories were only vaguely familiar; many more, I’d never heard.
The central figure in Dormouse is Doug Engelbart, whose long-time passion was to build a working version of Vannevar Bush’s “Memex” machine. In the 1940s, while working in Washington, DC, as director of the wartime U.S. Office of Scientific Research and Development, Bush had imagined a “machine that could track and retrieve vast volumes of information,” and he wrote about his idea in the July 1945 issue of the Atlantic Monthly:
“Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, ‘memex’ will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.”
Engelbart encountered the idea of the Memex while serving as a radar technician in the U.S. Navy during World War II. It took root in his imagination and, in 1950, he had an epiphany, one that guided him and his work for the next two decades. Markoff writes that Engelbart “saw himself sitting in front of a large computer screen full of different symbols….He would create a workstation for organizing all of the information and communications needed for any given project….he saw streams of characters moving on the display. Although nothing of the sort existed, it seemed the engineering should be easy to do and that the machine could be harnessed with levers, knobs or switches. It was nothing less than Vannevar Bush’s Memex, translated into the world of electronic computing.”
Engelbart earned a PhD in electrical engineering from the University of California, Berkeley, in 1955, and was soon working at the Stanford Research Institute (SRI). There, he came across a paper called “Shrinking the Giant Brains for the Space Age,” which had been presented at a conference in June 1959. Its author was Jack Staller of the aerospace firm American Bosch ARMA, who had written, prophetically, “The problem is to compress a room full of digital computation equipment into the size of a suitcase, then a shoe box, and finally small enough to hold in the palm of the hand….Forming on the horizon are solid state circuits or the growing of the whole circuit on a single small solid-state wafer and molecular film techniques where films millionths of an inch thick and equally narrow conductors are built up layer over layer to form whole sections or perhaps complete computers in fractions of cubic inches.”
Then, as Markoff relates, in February 1960, five years before Gordon Moore published an article in Electronics magazine whose assertions would become known as “Moore’s Law,” Doug Engelbart came to the same conclusion that Moore would: that a relentless and inevitable increase in computing capacity would result from the continuous shrinking of the transistor. And he saw that with this increase in capacity, computers would soon be powerful enough to augment the human intellect. This dream – Engelbart’s dream – has led to computing as we know it.
Engelbart found funding from visionary program managers in the federal government, people such as J. C. R. Licklider of the U.S. Defense Advanced Research Projects Agency, who envisioned the computer as a communications tool, and NASA’s Bob Taylor, who later assembled and led the great group of computer scientists at Xerox PARC. With their support, Engelbart, from 1960 to 1968, led a team at SRI that implemented a prototype system demonstrating his ideas.
The high point of Dormouse is Markoff’s recounting of Engelbart’s first public presentation, in December 1968, of his “oNLine System” (NLS). Markoff writes, “In one stunning ninety-minute session, [Engelbart] showed how it was possible to edit text on a display screen, to make hypertext links from one electronic document to another, and to mix text and graphics, and even video and graphics. He also sketched out a vision of an experimental computer network to be called ARPAnet and suggested that within a year he would be able to give the same demonstration remotely to locations across the country. In short, every significant aspect of today’s computing world was revealed in a magnificent hour and a half.
“There were two things that particularly dazzled the audience:…First, computing had made the leap from number crunching to become a communications and information-retrieval tool. Second, the machine was being used interactively with all its resources appearing to be devoted to a single individual! It was the first time that truly personal computing had been seen.”
The 1960s: Drugs and Protest
Dormouse describes how political, social, and cultural forces came together to shape the early personal-computer industry on the West Coast: Engelbart and his colleagues were part of a community that included early experimenters with LSD and leaders of the antiwar movement.
Despite today’s conservative backlash against much of what the 1960s’ countercultural movement stood for, the Internet and the personal computer have been accepted, and they give us great tools to spread awareness. Though these tools can also be used to amplify propagandizing, there is reason to believe that they will ultimately give advantage to the truth. In this, the spirit of the 1960s’ struggle lives on.
Some who read Markoff’s book may feel nostalgic for the drug culture that developed alongside the personal computer, but I do not. For me, the stories about drug experimentation are sad stories of a quest gone awry. The promise was that LSD and other drugs would expand our creativity. But like other abused substances, including alcohol and, now, in America, even food, they have largely brought us personal tragedy. In the end, drugs such as LSD and marijuana give most users, not new creativity, but merely the personal and temporary presumption of the new, and at great personal cost.
The personal-computing and Internet revolutions have produced much of what the drug experimenters were seeking. They have given people long-sought enhancements of the ability to communicate and to learn. And now, with so much accessible to so many people through the Internet, we see hope for the expansion of creativity itself, and for the raising of collective consciousness. The Internet promotes creativity not through solitary, short-lived experiences, but through the use of a real, permanent, and shareable medium. It offers new awareness through access to the firsthand truth about what is going on in the world – if its users take the time to separate the truth from the flood of mass media and junk that the Internet also brings.
Dormouse tells the important story of what the Bay Area did for computing. But as I read the book, I found myself thinking about other early history, stories not centered on the West Coast. While the PC was born in California, its conception required important contributions from other parts of the country.
Today, PCs are highly networked, run multiple applications at the same time (much as the time-sharing computers of the 1960s and 1970s supported multiple users), and have virtual memory to support large applications. These and many other key technical capabilities originated not in the counterculture of the West Coast, but in the great universities and research labs on the East Coast, in England, and even in the upper Midwest, where I grew up.
Around the time of Engelbart’s NLS presentation, a practical implementation of a different set of groundbreaking computing concepts, far beyond a mere demonstration, appeared in the form of an operating system called the Michigan Terminal System (MTS).
MTS was written for a mainframe – the IBM 360/67 – that was one of the first computers to have virtual memory. IBM had 300 programmers writing a new operating system for this computer, but they were far behind schedule. So the staff at Michigan wrote MTS, which featured time-sharing, support for virtual memory, file sharing with protection, and many other functions in new combinations that were eventually to become key parts of the PC.
By 1967, MTS was up and running on the newly arrived 360/67, supporting 30 to 40 simultaneous users. Fully a year before MTS was finished, in 1966, Michigan began a related project, the Merit network, which would provide a way to network multiple systems. Like the early ARPAnet, Merit used minicomputers – Digital Equipment Corporation’s PDP-11s – to connect larger machines to each other.
By the time I arrived as an undergraduate at the University of Michigan in 1971, MTS and Merit were successful and stable systems. By that point, a multiprocessor system running MTS could support a hundred simultaneous interactive users, as well as remote graphics applications on computers such as the DEC 8/338 and 9/339 – pioneering minicomputers with interactive vector graphics displays. MTS served as the hub of a campuswide network for these machines, and Merit soon connected the computers of the University of Michigan with those at other universities.
Similarly powerful systems were built on Digital Equipment PDP-10s at MIT, Stanford (SAIL), and Carnegie Mellon University, often, like Engelbart’s NLS, with support from federal research funds. Markoff recounts in passing what I had forgotten (if I ever knew it) – that Steve Jobs and Steve Wozniak were hanging out at SAIL long before the famous Jobs visit to PARC. SAIL and similar systems had much greater importance in the birth of the PC than is generally acknowledged. In my view, these systems underpin personal computing as much as Engelbart’s work does.
Engelbart’s dream came true because Moore’s Law held. Those who believed in the law often succeeded. They saw, as Engelbart did, that computing was destined to become cheap and therefore widely available. It was these people who gave rise to a new wave in computing: the PC industry. Those people who did not foresee the impact of the relentless miniaturization fared less well; thus nearly all of the companies in the previous wave – the minicomputer industry – failed or were acquired.
Most of today’s best thinkers on the subject agree that Moore’s Law has 10 or more years yet to run. If they’re right, transistor density will in 10 years be about 100 times what it is now. In thinking about the future of computing, in hoping for further augmentation of the human intellect, do we understand what another 100-fold increase in computing power will mean? It should enable big new dreams. Let me suggest some, which might fuel the next part of the story of personal computing.
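Before turning to those dreams, it is worth pausing on the arithmetic behind that 100-fold figure. Here is a minimal back-of-the-envelope sketch in Python, assuming a doubling period of roughly 18 months; the period itself is an assumption, since Moore’s original article and its later restatements differ:

    # Back-of-the-envelope check of the "100-fold in 10 years" claim.
    # Assumes transistor density doubles every 18 months, one common
    # reading of Moore's Law; the exact period is an assumption here.
    months = 10 * 12                     # a ten-year horizon
    doubling_period_months = 18          # assumed doubling period
    multiplier = 2 ** (months / doubling_period_months)
    print(f"density after 10 years: about {multiplier:.0f}x today's")
    # prints: density after 10 years: about 102x today's

With a two-year doubling period, the same arithmetic yields only about 32-fold, so “about 100 times” sits at the optimistic end of the plausible range.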
Engelbart imagined a figure called an “augmented architect”:
“Let us consider an ‘augmented’ architect at work. He sits at a working station that has a visual display screen some three feet on a side; this is his working surface and is controlled by a computer (his ‘clerk’) with which he can communicate by means of a small keyboard and other devices….Every person who does his thinking with symbolized concepts…should be able to benefit significantly.”
Are we taking full advantage of the power of computers to augment our intellects? I don’t think so. Computers are currently unaware of their environments – of the people and objects around them. The computer does not have cameras to see what we see, to know what books and papers are in the room. We don’t interact with the computer in natural ways – for instance, by drawing on paper (while the computer watches with its camera) or on electronic paper (on which the computer could draw too). We don’t talk, listen, or gesture to computers the way we do to each other.
And we’re no better at entering into the computer’s environment than it is at understanding ours. The best commonly available immersive technology we have today is the video game, not the architectural design package. We, sadly, spend much more of our collective energy and focus on virtual reality for entertainment than for education and augmentation.
Worst of all, computer software doesn’t really interact with us. It executes what we request but doesn’t initiate actions on its own. Our computers do not understand the goals of the projects we’re working on. They don’t think ahead and work, unprompted, in concert with us toward those goals. In reality, we work alone.
We have, or will soon have, sufficient computing power to build interactive, immersive, and aware software, so that the rooms in which we work, as architects or engineers, scientists or students, can routinely become immersive and interactive environments. We need to sponsor the hard research needed to make this dream a reality – to find and to fund the dreamers.
Your (Pocket) Personal Computer
Nearly 50 years ago, J. C. R. Licklider imagined the computer as a communications device. When we look at today’s smart mobile devices, the BlackBerries and the Treos and the Nokia Communicators, we underestimate their importance. Their capabilities are relatively limited. Compared to phones, they’re big and bulky, but compared to notebook computers, they have frustratingly small screens and keyboards. Few people have them. They don’t really feel like our most personal computers.
But I think they are. The power of such devices will grow rapidly, as did the power of the PC. And they will become intensely personal, because they will be able to do more for you than anything that is as portable. They will thus naturally become the focus of improvements in connectivity and communication.
Much as the Google query you make from your home runs on machines located elsewhere, software run on behalf of your pocket PC could reside in remote server farms, on computers you time-share with others – but that you don’t have to maintain.
Does this mean that desktop PCs as we know them will disappear? I’m not suggesting that. Rather, I think, we will find that these larger computers with keyboards become less personal, become shared devices. In my household, many of us have accounts on several different computers, which share our personal information among them. None of these is “my” computer, yet all are, when they need to be. The individual machines are becoming access points to my presence on the network.
Your smart phone will benefit greatly from the next 100-fold improvement bestowed by Moore’s Law. It can acquire more sensors, becoming a personal medical scanner, tricorder, translator, recorder, and interpreter. There are many worthy dreams for such devices!
Note to Government: Think Big
Engelbart’s research found strong support from the government. But that was a long time ago. Federal funding for speculative research has now, largely, dried up; agencies looking for short-term paybacks now typically sponsor work on specific problems rather than the kinds of pure research, of unfettered thinking, that leads to the birth of whole new industries, as Engelbart’s did.
During the Clinton administration, I served as cochair of the President’s Information Technology Advisory Committee (PITAC). Fellow members of the committee and I recommended that the government think big and recognize that computers will be key to all economic growth in the future, not just the growth of the computer industry itself. We argued that there were industries where, without new computer applications, the United States would become substantially less competitive.
Historically, the most cutting-edge research in computing was sponsored for national defense, with a very long-term view. We recommended that the government fund, in a similar way, a number of large computing projects. Each of these projects would cut across disciplines and make different assumptions (call them guesses) about what the future would be like. Each would create an imagined environment and determine what it would be like to live in it. The projects would result, we hoped, in inspirational prototypes, NLS-like demonstrations of how the great advances in computing and communication, the next 100-fold improvement, could be put to use by the next generation of Engelbarts.
The committee’s recommendations were not followed. Though a President Gore would have been supportive of them, the current administration has not been, and the long-term trend toward a short-term focus in government-sponsored research continues. The young Doug Engelbarts of today will be hard pressed to find support for their dreams.
What a shame. It’s possible now, more than ever, to augment human intellect. We should boldly set our sights on Engelbart’s goal. John Markoff has done us all a great service by writing a book that reminds us of the great value of thinking big.
Bill Joy was the architect of Berkeley Unix and a cofounder of Sun Microsystems. He is now a partner at the venture-capital firm Kleiner Perkins Caufield & Byers.