
Creating The People’s Computer

One of the nation’s foremost computer scientists, exasperated by the unfriendliness of today’s computer systems, suggests what designers can do to make machines serve human needs rather than the other way around.

It is a few days before Christmas. I am out shopping at a well-known upscale department store in the Greater Boston area. I take nine items to the cash register. The cashier passes her magic wand over each package to read the bar code, and the impact printer rattles away as it prints a description and price for each item. I am getting ready to pull out my credit card when the woman turns to the cash register beside her and, horror of horrors, starts keying in the exact same information manually, reading the numbers off each package in turn.

She is on package number six when I clear my throat conspicuously and, with the indignation of a time-study specialist, ask her why in the world she is duplicating the work of the bar-code reader. She waves me to silence with the authority of one accustomed to doing so. “Please, I have to finish this,” she says politely. I tell her to take her time, even though my muscles are tightening up and my brain is engaging in vivid daydreams of punitive acts.

She finishes the last package, ignores my pointed sigh, reaches for a pencil, and … starts all over again! This time she is writing in longhand on the store’s copy of the receipt a string of numbers for every package. I am so shocked by this triple travesty that I forget my anger and ask her in true wonder what she is doing. Once more she waves me to silence so she can concentrate, but then obliges: “I have to enter every part number by hand,” she says. “Why?” I ask, with a discernible trembling in my voice. “Because my manager told me to,” she replies, barely suppressing the urge to finish her sentence with the universal suffix “stupid.” I could not let this go. I called for the manager. He looked at me knowingly and said with a sigh, “Computers, you know.”

I told him that this looked a bit more serious than that, and he proceeded to explain in slow, deliberate phrasing that the central machine didn’t work, so a duplicate had to be entered by hand.

“Then, why enter it at all into the computer?” I ventured hopefully.

“Because it is our standard operating procedure, and when the central machine comes back, we should be in a position to adjust our records for inventory changes.” Hmm.

“Then why in the world is she both keying in the numbers and entering them with the bar-code reader?” I countered.

“Oh. That’s the general manager’s instruction. He is concerned about our computer problems and wants to be able to verify and cross-check all the departmental entries.”

I quietly walked out, stunned.

After I got over my shock at the absurd waste of time this store’s procedures caused for the cashier, and for me, I began to marvel at how the great promise that computers would improve human productivity is more easily discussed than implemented. Indeed, the topic of whether computers are raising human productivity has generated a great deal of controversy. Technology detractors will point to such encounters and say, “See, computers don’t help us.” And it’s true that information technology does hurt productivity in some cases; it takes longer to wade through those endless automated phone-answering menus than it does to talk to a human operator. If technology is not used wisely, it can make us less productive instead of more so.

But computers can also be incredibly helpful. Used properly, they help ring up prices faster, track inventory, and handle price changes. Productivity will rise in the Information Age for the same reason it did in the Industrial Age: the application of new tools to relieve human work.

Some people dismiss productivity concerns, arguing that computers make possible things we couldn’t do otherwise. Certainly that is true, as the World Wide Web, special effects in movies, and credit cards have shown us. But to ignore the computer’s fundamental ability to help humans do their brain work is at best perverse and at worst irresponsible. Productivity is the yardstick by which socioeconomic revolutions are measured. That was the case with plows, engines, electricity, and the automobile. If there is to be a true information revolution, computers will have to repeat the pattern with information and information work.

As we try to anticipate how computers might be used in the twenty-first century, we are bombarded with unparalleled confusion and hype: a faster Web/Internet, network computers, intranets, cyberspace, 1,000 video channels, free information, telework, and much more. To my thinking, this future world can be described simply and crisply as an “information marketplace,” where people and their interconnected computers are engaged in the buying, selling, and free exchange of information and information work.

Many issues surround the information marketplace: the technology of its underlying information infrastructures; its uses in commerce, health, learning, the pursuit of pleasure, and government; and the consequences of these new activities for our personal lives, our society, and our history. Here we will focus on a small but crucial aspect of this rich ensemble: ensuring that tomorrow’s information marketplace will help us in our eternal quest to get more results for less work.

Misused and Abused

Let’s begin by examining a series of “faults”: ways in which computers are misused today because of either technological or human foibles. The first step toward improving our productivity will be to correct these faults. Next, we’ll explore how to begin automating human work through computer-to-computer exchanges. The final and perhaps most vital step will be to make computers truly easier to use.

The additive fault: The ridiculous duplication of effort that I ran into at the department store happens often and in many different settings. We’ll call this failure the additive fault, because in these cases people are doing everything they used to do before computers plus the added work required to keep computers happy or to make people appear modern. In anybody’s book, this is a mindless productivity decrease. It should be stopped cold in whatever setting it raises its ugly head. And while we are at it, let’s recognize this particular problem is not caused by technology but by our own misuse of technology.

The ratchet fault: Some time after my encounter with the cashier, the same gremlins that seem to run ahead of me to set up challenging situations must surely have visited the airline clerk I encountered at Boston’s Logan Airport. When I handed him my ticket to New York and asked him to replace it with one to Washington, D.C., he said, “Certainly, sir,” and bowed to his terminal, as if to a god. As a seasoned observer of this ritual, I started recording his interactions. Bursts of keystrokes were followed by pensive looks, occasionally bordering on consternation, as with hand-on-chin he gazed motionless at the screen, trying to decide what to type next. A full 146 keystrokes later, grouped into 12 assaults demarcated by the Enter key, and after a grand total of 14 minutes, I received my new ticket.

What makes this story interesting from a productivity perspective is that any computer-science undergraduate can design a system that does this job in 14 seconds. You simply shove your old ticket into the slot, where all its contents are read by the machine. You then type or speak the “change” command and the new destination, and you get the revised ticket printed and shoved back in your hand. Because 14 minutes is 60 times longer than 14 seconds, the human productivity improvement with such a box would be 60 to 1, or 6,000 percent!
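To make the contrast concrete, here is a minimal Python sketch of what such a box would have to do; the Ticket record, the field names, and the reissue routine are my own illustrative assumptions, not any airline’s actual system.

```python
# A minimal sketch of the hypothetical "14-second box." The Ticket record and
# the reissue routine are illustrative assumptions, not any airline's system.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Ticket:
    passenger: str
    origin: str
    destination: str
    fare_class: str

def reissue(old: Ticket, command: str, new_destination: str) -> Ticket:
    """Read the old ticket, apply a one-word change command, hand back a new one."""
    if command != "change":
        raise ValueError("this sketch understands only the 'change' command")
    return replace(old, destination=new_destination)

old = Ticket(passenger="M. D.", origin="BOS", destination="NYC", fare_class="Y")
print(reissue(old, "change", "DCA"))  # one command, one argument, one new ticket
```

The point is not the code but its size: the entire interaction collapses to one command and one argument.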

Something is terribly wrong here. People run to buy a new computer because it is 20 percent faster than the one they have, and we are talking here about a 6,000 percent improvement. So why aren’t the airlines stampeding to build this box? For one thing, if they did this for every one of the possible requests, they would have to build a few thousand different boxes for each terminal. All right then, why don’t they reprogram their central computers to do this faster? Because that would cost a billion dollars. Why? Because the airlines have been adding so many software upgrades and changes to their systems that after 20 years they have built up a spaghetti-like mess that even they cannot untangle. In effect, they cannot improve their system without starting from scratch.

We’ll call this the ratchet fault of computer use because it’s like a ratcheting tire jack: every time a new software modification is added the complexity of the system rises, but it never comes down unless a precipitous event, like a total redesign, takes place. This problem is more a consequence of inadequate technology than of unsound human practice. If we had a software technology that could let us gracefully update our systems to suit our changing needs while maintaining their efficiency, then we wouldn’t be in this bind.

The excessive-learning fault: One-tenth of my bookshelf is occupied by word-processing manuals. Add the manuals for spreadsheets, presentations, and databases, and they easily fill half a shelf. Because I use graphics and do a bit of programming, I need a few more manuals. This brings the total length of my computer guidebooks to one EB: one (printed) Encyclopaedia Britannica. We’ll simply call this the excessive-learning fault: the expectation that people will learn and retain an amount of knowledge much greater than the benefits they’d get from using that knowledge. Imagine requiring people to digest an 850-page manual in order to operate a pencil. We laugh at the thought, but we accept it readily in the case of a word-processing program. I have little doubt that the first half of the twenty-first century will be spent getting rid of fat manuals and making computers much easier and more natural to use.

The feature-overload fault: Bloated is perhaps a more accurate adjective to describe the feature-packed programs hitting the market in the late 1990s. Vendors pack in features partly to cover their bets and partly to be able to charge higher average prices. Buyers are fascinated by the potential uses of their computers and value their prerogative to command their machines to do thousands of different things. Of course, in practice they end up doing only a few tasks and forget what features they have bought or how to use them. A top-selling “suite” of office software comes on a CD-ROM or 46 diskettes that require half a day to load into your machine. This is not productive. And it is caused by us, not by technological weaknesses. Consumers and corporate executives should declare birth control on the overpopulation of excessive and often useless features.

The fake-intelligence fault: My car has a fancy phone that was advertised as “intelligent” because when it makes a phone connection it automatically mutes the volume of the car radio to ensure a quiet environment. I found this feature delightful until one afternoon when I heard a good friend being interviewed on the radio. I immediately called a mutual friend so she could listen along with me over the phone and share in the excitement. This, of course, was impossible, because the phone muted the radio and I couldn’t override it. Welcome to the fake-intelligence fault. It crops up in many situations where a well-meaning programmer puts what he or she believes is powerful intelligence in a program to make life easier for the user. Unfortunately, when that intelligence is too little for the task at hand, as is always the case, the feature gets in your way. Faced with a choice between this kind of half-smart system and a machine with massive but unpretentious stupidity, I would opt for the latter, because at least then I could control what it could do.

As users striving to improve our productivity, we must always ask whether a new program offers enough value through its purported intelligence to offset the headaches it will inadvertently bring about. And suppliers of these ambitious programs should endow them with a Go Stupid command that lets users disable the intelligent features.

The machine-in-charge fault: It is 2:00 a.m., and I just got home. My Swissair flight from Logan was canceled because of trouble in the motor controlling the wing flaps. Some 350 passengers whose plans were thwarted were bombarding every available clerk at the airport. I abandoned that zoo, rushed home, switched on my computer, and tried to connect to the Easy Sabre do-it-yourself airline-reservation service offered by Prodigy to search for an alternative ticket for a morning flight out of either Boston or New York. I had to find out before going to sleep if this was possible. But before I had a chance to enter a single keystroke, Prodigy seized control of my screen and keyboard. It informed me that to improve my system’s use of its online services, it would take a few moments (meaning a half-hour minimum) to download some improved software.

There was nothing I could do to stop Prodigy from “helping me” in its own murderous way. A meager piece of anonymous software was in full control of this situation, while I, a human being, was pinned against the wall. Meanwhile, I knew that with each passing minute, another of those frantic nomads at the airport would take another of the rapidly vanishing seats on the next morning’s few flights. I gladly would have used software that was several generations old to get my job done sooner. I felt I was drowning in shallow surf from a stomach cramp while the lifeguard on the beach was oblivious to my screams because he was using his megaphone to inform me and all the other swimmers of improved safety procedures.

This is exactly the same fault that requires precious humans to spend valuable time executing machine-level instructions dispensed by hundred-dollar automated telephone operators, with their familiar “If you want Marketing, please press 1. If you want Engineering …” A good part of this machine-in-charge fault must be attributed to human failure in allowing such practices to continue without objection, but programmers must also take some of the blame. They often deliberately commit this fault because it’s simpler, therefore cheaper, to program a computer to interrogate the user and not let go until all questions have been answered in one of a few fixed ways than to allow the user to do any one of several things with the assurance that the computer will pay attention.

Of course, interactions controlled by the machine are not always undesirable. A mistaken command by you to erase everything inside your computer should not be casually executed. However, 95 percent of the overcontrolling interactions on the world’s computers don’t involve such grave situations. The sooner these software crutches vanish and the user is given control, the sooner machines will serve humans rather than the other way around.

The excessive-complexity fault: I am at my office, it is almost noon, and I discover with considerable panic that I forgot to retrieve from my home computer the crucial overheads I need for an imminent lunch meeting. No sweat. I’ll call home and have them shipped electronically to my office. As luck would have it, though, the only one home is the electrician, but he is game. “Please turn the computer on by pushing the button on top of the keyboard,” I say. He is obviously a good man, because I hear the familiar chime through the phone. During the two minutes the machine takes to boot up, the electrician asks why the machine doesn’t come on instantly, like a light bulb.

I refrain from telling him that I share his consternation. For three years I have been trying to interest sponsors and researchers in a project that would address this annoying business in which a human respectfully begs permission from a computer’s software to turn the machine on or off. Instead, I explain that the machine is like an empty shell and must first fill itself with all the software it needs to become useful. “Okay,” I say, “pull down the Apple menu and select the Call Office command,” which I had providentially defined some time back. He complies, and I hear my home modem beeping as it dials my office modem. On the second ring I hear the office modem next to me answer. We are almost there, I muse hopefully.

“Do you see the message that we are connected?” I ask.

“Nope,” he responds. Another minute goes by and he reads me an alert message that has appeared on my home computer’s screen. I know what happened. The modems latched correctly and can send signals to each other but for some unknown reason the software of the two machines cannot communicate. I ask him to hold while I restart my machine. Like many people, and all computer professionals, I know that restarting with a clean slate often solves problems like this one, even though I have no idea what actually caused the problem.

As I guide the electrician through rebooting my home computer, I get angry, because these problems would be reduced if my office computer were calling my home machine rather than the other way around. But my home machine has only “remote client” software, meaning that it can call out but cannot receive calls. This distinction between clients and “servers” is a residue of corporate computing and the time-shared era’s central machines, which dispensed lots of data to the dumber terminals. The distinction must vanish so that all computers, for which I’d coin the term “clervers,” can dish out and accept information equally, as they must if they are going to be able to support the distributed buying, selling, and free exchange of information that will take place in the information marketplace.
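To make the clerver idea concrete, here is a rough Python sketch of a peer that plays both roles at once; the port number and the toy echo protocol are assumptions chosen purely for illustration.

```python
# A rough sketch of a "clerver": one process that both accepts calls (server
# role) and places them (client role). The port and echo protocol are made up.
import socket
import threading
import time

PORT = 9999  # illustrative

def listen_for_peers(host: str = "127.0.0.1") -> None:
    """Accept incoming connections and echo back whatever arrives."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, PORT))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(4096))

def call_peer(host: str, payload: bytes) -> bytes:
    """Dial out to another clerver and exchange a message."""
    with socket.create_connection((host, PORT)) as c:
        c.sendall(payload)
        return c.recv(4096)

# Every machine runs both halves, so "home" can serve files to "office"
# just as easily as the other way around.
threading.Thread(target=listen_for_peers, daemon=True).start()
time.sleep(0.5)  # give the listening half a moment to bind
print(call_peer("127.0.0.1", b"send me last night's overheads"))
```

Run on two machines instead of one, either side could have initiated that afternoon’s file transfer.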

When my home machine has again booted up, we go through the modem dance once more, and this time the software latches. I ask the electrician to select the Chooser command and click on the AppleShare icon and then to click on the image of my office machine. Now he needs my password, which I give him promptly. He reports activity on his screen that I interpret as success. I tell him how to locate the precious file I need and send it to me. In two and a half more minutes the overhead images arrive safely in my machine. I thank the electrician profusely and send the images to my printer, now filled with blank transparency sheets, and I’ve got them. I arrive at the meeting 30 minutes late.

Why couldn’t I simply give my home computer in one second a single command like “Send the overheads I created last night to my office” and have them arrive three minutes later? Fellow techies, please don’t tell me it can be done with a different kind of machine or a different operating system, macros, agents, or any other such tools, because I know and you know better. This simple act just cannot be carried out easily and reliably with today’s computers.

As system designers we must begin the long-overdue corrective actions against the excessive-complexity fault by simplifying options, restricting them, and, most important, reversing a design point of view rooted in decades-old habits. We should tailor computer commands and options to users’ needs, rather than tailoring them to existing system and subsystem needs and expecting users to obediently adapt. We must do for computer systems what we have done for cars: get away from giving people controls for the fuel mixture, ignition timing, and the other subsystems, and give them a steering wheel, a gas pedal, and a brake for driving the car.
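A sketch of what that reversal might look like in code: one user-level command that hides the subsystem steps behind it. Every function name below is a hypothetical stand-in I invented for illustration, not a real API.

```python
# The "steering wheel" point of view: the user states the goal; the subsystems
# (dialing, authentication, file transfer) are hidden behind one command.
def dial(host: str) -> None:
    print(f"dialing {host}, retrying or rebooting as needed")

def log_in(host: str, user: str) -> None:
    print(f"authenticating {user} on {host}")

def find_files(query: str) -> list[str]:
    print(f"resolving '{query}'")
    return ["overheads.key"]  # pretend the files were found

def transfer(files: list[str], destination: str) -> None:
    print(f"shipping {files} to {destination}")

def send(query: str, source: str, destination: str, user: str) -> None:
    """The command the user should get to give: what to send and where."""
    dial(source)
    log_in(source, user)
    transfer(find_files(query), destination)

send("the overheads I created last night", source="home", destination="office", user="md")
```

Everything below `send` corresponds to the fuel mixture and the ignition timing; `send` itself is the steering wheel.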

Electronic Bulldozers

One of the biggest roadblocks to building an effective information marketplace is the inability of interconnected computer systems to easily relieve us of human work. This is because today’s networked computer systems have no way of understanding one another, even at the rudimentary level needed to carry out routine transactions among themselves. Yet the potential for automating information work is enormous: office work accounts for one-half of the world’s industrial economy, which suggests how huge this new socioeconomic movement could be. To off-load human brainwork, we must develop tools that let computers work with one another toward useful human purposes. I call the tools that will make this possible “automatization” tools, to distinguish them from the automation tools of the Industrial Revolution that off-loaded human musclework.

Today we are so excited by e-mail and the Web that we plunge in with all our energy to explore the new frontier. If we stop and reflect for a moment, however, we will realize that human productivity will not be enhanced if we continue to use our eyes and brains to navigate through this maze and understand the messages sent from one computer to another. Imagine if the companies making the first steam and internal combustion engines of the Industrial Revolution had made them so that they could work together only if people stood beside them and continued to labor with their shovels and horse-drawn plows. What an absurd constraint. Yet that is what we do today: expend a huge amount of human brainwork to make our computers work together. It’s time to shed our high-tech shovels and build the electronic bulldozers of the Information Age. That’s what the automatization tools are all about.

Achieving some basic degree of understanding among different computers to make automatization possible is not as technically difficult as it sounds. But it does require one very difficult commodity: human consensus. One simple way to achieve automatization is to use electronic forms (e-forms), where each entry has a pre-agreed meaning that all participating computers can exploit through their programs. Suppose that I take 3 seconds to speak into my machine the command, “Take me to Athens next weekend.” My machine would generate the right e-form for this task and ping-pong back and forth with the reservation computer’s e-form before finding an acceptable date and class and booking the flight. Since it would have taken me 10 minutes to make an online reservation myself, I could rightfully brag that my productivity gain was 200 to 1 (600 seconds down to 3 seconds), or 20,000 percent!
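Here is a toy sketch of such an exchange, assuming both sides have agreed in advance on a handful of field names; the airline’s inventory and the matching rule are invented for illustration.

```python
# A toy e-form exchange. The field names are the pre-agreed convention; the
# inventory and the matching rule are invented for illustration.
travel_eform = {
    "action": "book",
    "destination": "ATH",
    "dates": ["Fri", "Sat"],
    "classes": ["economy", "business"],
}

airline_inventory = [  # what the reservation computer has on offer (made up)
    {"destination": "ATH", "date": "Sat", "class": "economy", "seats": 2},
    {"destination": "ATH", "date": "Sun", "class": "business", "seats": 5},
]

def negotiate(request: dict, inventory: list[dict]) -> dict | None:
    """Stand-in for the ping-pong: return the first offer satisfying the e-form."""
    for offer in inventory:
        if (offer["destination"] == request["destination"]
                and offer["date"] in request["dates"]
                and offer["class"] in request["classes"]
                and offer["seats"] > 0):
            return offer
    return None

print(negotiate(travel_eform, airline_inventory))
```

Because both programs attach the same meaning to “destination,” “date,” and “class,” no human eyes or fingers are needed in between.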

Thus we can imagine that in the information marketplace, common interest groups will establish e-forms to specify routine and frequently recurring transactions in their specialty, whether those entail buying oranges wholesale or routing x-rays among different medical departments. If the members of such a group can agree on an e-form, especially one that represents a laborious transaction, then they will achieve substantial automatization gains. Computer programs or people interested in doing that kind of business would be able to look up the agreed-upon e-form and use it in their own computers, toward the same gains but with much less effort.

Quite a few computer wizards, and people who are averse to standards, believe that common conventions like e-forms resemble Esperanto, the ill-fated attempt to create a universal spoken language among all people. They argue that attempts at shared computer languages will suffer the same ills. Instead, they advocate that the only way our computers will get to understand each other will be by translating locally understandable commands and questions among their different worlds, just as people translate between English and French.

This argument is faulty because shared concepts are required even in translating among different languages. Whether you call an object “chair” or “chaise,” it is still the thing with four legs on which people sit. It is that shared concept base, etched somehow in your brain, that makes possible a common understanding of the two different words in English and French. Without it, no amount of interconversion can lead to comprehension, simply because, as in the case of computers, there is nothing in common on either side to be comprehended.

If we can form a consensus within and across specialties concerning the most basic concepts computers should share, then even if we end up with different languages and dialects, software developers will be able to write programs and ordinary users will be able to write scripts that install useful computer-to-computer automatization activities: searching for information on our behalf, watching out for events of interest to us, carrying out transactions for us, and much more.

Gentle-Slope Systems

Automating computer-to-computer transactions and fixing problems in present computer systems are good steps toward making computers and the information marketplace serve us. But designing systems that are inherently easier to use is the really big lever. I believe that this endeavor will consume our attention for a good part of the next century.

In the last decade, anyone who has uttered the phrase “user friendly” in my presence has run the risk of physical assault. The phrase has been shamelessly invoked to suggest that a program is easy and natural to use when this is rarely true. Typically, “user friendly” refers to a program with a WIMP interface, meaning it relies on windows, icons, menus, and pointing along with an assortment of pretty colors and fonts that can be varied to suit users’ tastes. This kind of overstatement is tantamount to dressing a chimpanzee in a green hospital gown and earnestly parading it as a surgeon. Let’s try to penetrate the hype by painting a picture of where we really are with respect to user friendliness and where the true potential for ease of use lies.

It is sometime in the late 1980s. A friend approaches you, excited by his ability to use spreadsheets. You ask him to explain how they work. He shows you a large grid. “If you put a bunch of numbers in one column,” he says, “and then below them put the simple command that adds them up, you will see their total in the bottom cell. If you then change one of the numbers, the total will change automatically.” The friend rushes on, barely able to control his exuberance: “And if you want to make the first number 10 percent larger, you just put in the cell next to it the simple command that multiplies it by 1.1.” His expression becomes lustful: “Do you want to increase all the numbers by 10 percent? Just drag your mouse down like this, and they will all obey.”

He takes in a deep breath, ready to explode once more, when you stop him cold. “Thank you. Now go away,” you say. “You have taught me enough to do all my accounting chores.” This is how millions of people today use spreadsheet programs like Microsoft Excel and Lotus 1-2-3. They hardly know more than a tenth of the commands, yet they get ample productivity gains.

You are happy with your newly acquired knowledge until one day you discover that you need to do something a bit more ambitious, like repeat over an entire page all the laborious operations you have set up but with a new set of initial numbers. Perplexed, you go back to your friend, who smiles knowingly and tells you that you must now learn about macros. His explanations are no longer as simple as before, and you just can’t get the spreadsheet to do what you want. This is where most of the millions who use spreadsheets give up. But instead you fight on, eventually mastering the mysteries of the macro. It’s really a computer program written in an arcane programming language that replaces you in commanding the spreadsheet program to do things you would have done manually.
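Stripped of the arcane syntax, a macro is just that: a small program. A minimal Python sketch of the idea, using a plain list of numbers as a stand-in for the spreadsheet column:

```python
# A macro in spirit: a small program that repeats the operations you set up by
# hand (add 10 percent, then total the column) over any new set of numbers.
def run_macro(column: list[float]) -> tuple[list[float], float]:
    increased = [x * 1.1 for x in column]   # "make every number 10 percent larger"
    return increased, sum(increased)        # the bottom cell, updated automatically

for fresh_numbers in ([120.0, 45.0, 80.0], [19.95, 3.50, 7.25, 60.00]):
    cells, total = run_macro(fresh_numbers)
    print(cells, "->", round(total, 2))
```

The macro replaces you at the keyboard; the cliff is that writing it means learning, in effect, to program.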

You sail along for the next six months until you develop the need to do an even more ambitious task that involves designing a human-machine interface that will make your program more useful. You go back to your friend, who tells you that you have become too good for the limited capabilities of this spreadsheet application, and that you must now learn how to use a real programming language like C++. Unaware of what lies behind these three innocent symbols but unwilling to give up, you press on. This costs you your job, because you must now devote full time to a colossal new learning endeavor. Yet you are so enamored with programming that you don’t mind. In fact, you like the idea. Two years later, having harnessed C++ and a few more programming languages and operating systems, you begin a career as a successful independent software vendor and eventually become wealthy.

This happy ending cannot hide the barriers that you have had to overcome along the way. You decide to graph the effort you expended versus the ability you gained. The result is a line starting at the left and moving along to the right. There is a long, slowly rising portion and then a huge hill where you had to learn a lot of new stuff in order to move further right. Then there are more slowly rising stretches and more huge hills, like a mountain chain where each new mountain gets higher. You wish that someone would invent an approach with a gentler slope, one where you get ever-greater returns as you increase your learning effort, without the impossible cliffs that you had to climb. I predict that such “gentle-slope systems,” as I like to call them, will appear and will mark an important turning point of the Information Age.

The gentle-slope systems will have a few key properties. First and foremost, they will give incrementally more useful results for incrementally greater effort. They will be able to automate any activity you do that is repetitive. They will be graceful, in the sense that incomplete actions or errors on your part will result in reasonable degradations of performance rather than catastrophes. Finally, they will be easy to understand: no more complicated than reading a cookbook recipe.

Conceptually Challenged

One reason it is difficult for nonprogrammers to tell computers what to do is that the software systems that surround us are preoccupied with the structure rather than the meaning of information. We can program them to do anything we want, but they are unaware of the meaning of even the simplest things we are trying to do. Let me illustrate.

It takes me 17 seconds to say to a programmer, “Please write me a program that I can use to enter onto my computer the checks I write, along with the category of each expenditure (food, recreation, and so forth). And do this so that I can ask for a report of the checks that I have written to date, listed chronologically or by category.”

I have given this assignment several times to different people. Master programmers invariably decline to play and tell me to go buy the program, because it’s commercially available. Good programmers will say they can meet the request in a couple of hours, then end up taking a day or two to develop a shaky prototype. Inexperienced programmers will say cockily that they can write the program in a few minutes as a spreadsheet macro, and are generally unable to deliver anything at all. Intuit, the company that developed the very successful Quicken program that does this job and more, took two years and many millions of dollars to develop it, test it, document it, and bring it to market.

Why can I “program” a human being to understand the above instruction in 17 seconds, while it takes a few thousand to a few million times longer to program a computer to understand the same thing? The answer surely lies in the fact that humans share concepts like check, category, report, and chronological, while computers do not. The machine is so ignorant of these concepts that programmers must spend virtually all of their programming time teaching the computer what they mean. If, however, I had a computer that already understood some of these concepts, then I might be able to program it to do my job in a very short time. This is an important way in which computers could increase our productivity in the twenty-first century: by being made to understand more human concepts, and to understand them more deeply.
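To see how much must be spelled out, here is a minimal Python sketch of the 17-second assignment; the field names and the two report orderings are assumptions of mine, and every concept the machine does not share has to be written down by hand.

```python
# A minimal sketch of the check-register assignment. Every concept the machine
# does not share (check, category, chronological) must be spelled out by hand.
from datetime import date

checks: list[dict] = []  # the register

def enter_check(day: date, payee: str, amount: float, category: str) -> None:
    checks.append({"date": day, "payee": payee, "amount": amount, "category": category})

def report(by: str = "date") -> list[dict]:
    """Checks written to date, listed chronologically or grouped by category."""
    if by == "category":
        return sorted(checks, key=lambda c: (c["category"], c["date"]))
    return sorted(checks, key=lambda c: c["date"])

enter_check(date(1997, 3, 1), "Star Market", 82.17, "food")
enter_check(date(1997, 2, 14), "Symphony Hall", 45.00, "recreation")
for check in report(by="category"):
    print(check)
```

Even this toy omits the data entry screens, the error handling, and the persistence that a real product like Quicken must supply, which is where the two years and the millions go.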

For computers to be truly easier to use, technologists will have to shift their focus away from the twentieth-century preoccupation with the structures of information tools like databases, spreadsheets, editors, browsers, and languages. In their early stage, computers became ubiquitous because this focus allowed these common tools to be used equally in thousands of applications, from accounting to engineering to art. Yet that same generality is what makes them ignorant of the special uses they must ultimately serve, and therefore less useful than they should be, much like a dilettante jack-of-all-trades.

What we need now, to boost utility further, is a new breed of software systems, like a spreadsheet that an accountant can easily program, that already “understand” higher-level repetitive tasks like setting up charts of accounts, doing a cash reconciliation, and pulling trial balances.

Freed from the tyranny of generality, these specialized programming “environments” will come to offer much more of the basic information and operations of their specialty. The time has come for computer technologists to abandon the “generalist” orientation that served people well for the first four decades of the computer era and shift their focus from the structure to the meaning of information.

Everyone a Programmer

The biggest promise of the Information Age is the great and still unrealized potential of tailoring information technology to individual human needs. Today’s application programs are like ready-made clothes: one size fits all. So most are ill-fitting, and we have to contort ourselves to improve the fit. Another consequence of this practice for business is that if every company used the same set of canned programs, they would all follow more or less the same procedures, and no company would stand out against the competition. Shrink-wrapped, ready-made software is good enough for the state of information technology at the end of the twentieth century. But it won’t be good enough in tomorrow’s information marketplace.

Great gains will be achieved when individuals and businesses can bend and fashion information tools to do exactly what they want them to do, rather than bending themselves to what the tools can do. This quest for customizable information tools with specialized knowledge will be no different from the current trend toward customized manufacturing. It could well be that by the close of the twenty-first century, a new form of truly accessible programming will be the province of everyone and will be viewed like writing, which was once the province of the ancient scribes but eventually became universally accessible.

This isn’t as absurd as it sounds. We invented writing so that we could communicate better with one another. Tomorrow we’ll need to communicate better with our electronic assistants, so we’ll extend our “club” to include them as well. Everyone will then be a “programmer,” not just the privileged few. And none of them will be conscious of it. In fact, this is already happening on a small scale among the millions of people who use spreadsheets and who would be very surprised to learn that they are programmers.

When I say people will program, I am not talking about writing the detailed code and instructions that make computers run. That will still constitute the bulk of a software program and will indeed be created by professional programmers, who will fashion the many larger building blocks that we will use. Each individual’s “programming” will account for a very small fraction of the software code, maybe 1 percent. But it will be the crucial factor that gives the program its specificity. It will be like building a model railroad; you don’t make all the track or engines or cars, but you do arrange the pieces to create your own custom railway patterns.

We can increase the usefulness of our machines in the emerging information marketplace by correcting current human-machine faults, by developing automatization tools, and by creating a new breed of gentle-slope software systems that understand specialized areas of human activity and that can be easily customized by ordinary people to meet their needs. Pursuing these directions should get us going on our quest, which I expect will last well into the twenty-first century, to harness the new technologies of information for the fulfillment of ancient human purposes.
