MIT Technology Review

Misused and Abused

Let’s begin by examining a series of “faults”: ways in which computers are misused today because of either technological or human foibles. The first step toward improving our productivity will be to correct these faults. Next, we’ll explore how to begin automating human work through computer-to-computer exchanges. The final and perhaps most vital step will be to make computers truly easier to use.

The additive fault: The ridiculous duplication of effort that I ran into at the department store happens often and in many different settings. We’ll call this failure the additive fault, because in these cases people are doing everything they used to do before computers plus the added work required to keep computers happy or to make people appear modern. In anybody’s book, this is a mindless productivity decrease. It should be stopped cold in whatever setting it rears its ugly head. And while we are at it, let’s recognize that this particular problem is not caused by technology but by our own misuse of technology.

The ratchet fault: Some time after my encounter with the cashier, the same gremlins that seem to run ahead of me to set up challenging situations must surely have visited the airline clerk I encountered at Boston’s Logan Airport. When I handed him my ticket to New York and asked him to replace it with one to Washington, D.C., he said, “Certainly, sir,” and bowed to his terminal, as if to a god. As a seasoned observer of this ritual, I started recording his interactions. Bursts of keystrokes were followed by pensive looks, occasionally bordering on consternation, as with hand-on-chin he gazed motionless at the screen, trying to decide what to type next. A full 146 keystrokes later, grouped into 12 assaults demarcated by the Enter key, and after a grand total of 14 minutes, I received my new ticket.

What makes this story interesting from a productivity perspective is that any computer-science undergraduate can design a system that does this job in 14 seconds. You simply shove your old ticket into the slot, where all its contents are read by the machine. You then type or speak the “change” command and the new destination, and you get the revised ticket printed and shoved back in your hand. Because 14 minutes is 60 times longer than 14 seconds, the human productivity improvement with such a box would be 60 to 1, or 6,000 percent!

Something is terribly wrong here. People run to buy a new computer because it is 20 percent faster than the one they have, and we are talking here about a 6,000 percent improvement. So why aren’t the airlines stampeding to build this box? For one thing, if they did this for every one of the possible requests, they would have to build a few thousand different boxes for each terminal. All right then, why don’t they reprogram their central computers to do this faster? Because that would cost a billion dollars. Why? Because the airlines have been adding so many software upgrades and changes to their systems that after 20 years they have built up a spaghetti-like mess that even they cannot untangle. In effect, they cannot improve their system without starting from scratch.

We’ll call this the ratchet fault of computer use because it’s like a ratcheting tire jack: every time a new software modification is added the complexity of the system rises, but it never comes down unless a precipitous event, like a total redesign, takes place. This problem is more a consequence of inadequate technology than of unsound human practice. If we had a software technology that could let us gracefully update our systems to suit our changing needs while maintaining their efficiency, then we wouldn’t be in this bind.

The excessive-learning fault: One-tenth of my bookshelf is occupied by word-processing manuals. Add the manuals for spreadsheets, presentations, and databases, and they easily fill half a shelf. Because I use graphics and do a bit of programming, I need a few more manuals. This brings the total length of my computer guidebooks to one EB: one (printed) Encyclopaedia Britannica. We’ll simply call this the excessive-learning fault: the expectation that people will learn and retain an amount of knowledge much greater than the benefits they’d get from using that knowledge. Imagine requiring people to digest an 850-page manual in order to operate a pencil. We laugh at the thought, but we accept it readily in the case of a word-processing program. I have little doubt that the first half of the twenty-first century will be spent getting rid of fat manuals and making computers much easier and more natural to use.

The feature-overload fault: Bloated is perhaps a more accurate adjective to describe the feature-packed programs hitting the market in the late 1990s. Vendors pack in features partly to cover their bets and partly to be able to charge higher average prices. Buyers are fascinated by the potential uses of their computers and value their prerogative to command their machines to do thousands of different things. Of course, in practice they end up doing only a few tasks and forget what features they have bought or how to use them. A top-selling “suite” of office software comes on a CD-ROM or 46 diskettes that require half a day to load into your machine. This is not productive. And it is caused by us, not by technological weaknesses. Consumers and corporate executives should declare birth control on the overpopulation of excessive and often useless features.

The fake-intelligence fault: My car has a fancy phone that was advertised as “intelligent” because when it makes a phone connection it automatically mutes the volume of the car radio to ensure a quiet environment. I found this feature delightful until one afternoon when I heard a good friend being interviewed on the radio. I immediately called a mutual friend so she could listen along with me over the phone and share in the excitement. This, of course, was impossible, because the phone muted the radio and I couldn’t override it. Welcome to the fake-intelligence fault. It crops up in many situations where a well-meaning programmer puts what he or she believes is powerful intelligence in a program to make life easier for the user. Unfortunately, when that intelligence is too little for the task at hand, as is always the case, the feature gets in your way. Faced with a choice between this kind of half-smart system and a machine with massive but unpretentious stupidity, I would opt for the latter, because at least then I could control what it could do.

As users striving to improve our productivity, we must always ask whether a new program offers enough value through its purported intelligence to offset the headaches it will inadvertently bring about. And suppliers of these ambitious programs should endow them with a Go Stupid command that lets users disable the intelligent features.

The machine-in-charge fault: It is 2:00 a.m., and I just got home. My Swissair flight from Logan was canceled because of trouble in the motor controlling the wing flaps. Some 350 passengers whose plans were thwarted were bombarding every available clerk at the airport. I abandoned that zoo, rushed home, switched on my computer, and tried to connect to the Easy Sabre do-it-yourself airline-reservation service offered by Prodigy to search for an alternative ticket for a morning flight out of either Boston or New York. I had to find out before going to sleep if this was possible. But before I had a chance to enter a single keystroke, Prodigy seized control of my screen and keyboard. It informed me that to improve my system’s use of its online services, it would take a few moments (meaning a half-hour minimum) to download some improved software.

There was nothing I could do to stop Prodigy from “helping me” in its own murderous way. A meager piece of anonymous software was in full control of this situation, while I, a human being, was pinned against the wall. Meanwhile, I knew that with each passing minute, another of those frantic nomads at the airport would take another of the rapidly vanishing seats on the next morning’s few flights. I gladly would have used software that was several generations old to get my job done sooner. I felt I was drowning in shallow surf from a stomach cramp while the lifeguard on the beach was oblivious to my screams because he was using his megaphone to inform me and all the other swimmers of improved safety procedures.

This is exactly the same fault that requires precious humans to spend valuable time executing machine-level instructions dispensed by hundred-dollar automated telephone operators, with their familiar “If you want Marketing, please press 1. If you want Engineering …” A good part of this machine-in-charge fault must be attributed to human failure in allowing such practices to continue without objection, but programmers must also take some of the blame. They often commit this fault deliberately, because it is simpler, and therefore cheaper, to program a computer to interrogate the user and not let go until every question has been answered in one of a few fixed ways than to let the user do any one of several things with the assurance that the computer will pay attention.
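The contrast between the two styles of dialogue can be sketched in a few lines of code. This is purely a hypothetical illustration (no real airline or phone system works this way): the first function interrogates the user through a fixed script, while the second accepts a single free-form request and pays attention to whatever the user chose to say.

```python
# Hypothetical sketch: machine-in-charge vs. user-in-charge dialogue design.

def machine_in_charge(answers):
    """Fixed interrogation: every question asked, in order, no shortcuts."""
    script = ["department?", "name?", "reason for calling?"]
    return list(zip(script, answers))

def user_in_charge(request):
    """One free-form request: 'key=value' pairs, any order, any subset."""
    return dict(pair.split("=", 1) for pair in request.split(";") if pair)

# The second style lets the caller state everything in one step:
print(user_in_charge("department=Marketing;reason=billing"))
# {'department': 'Marketing', 'reason': 'billing'}
```

The point is not the parsing trick but the locus of control: in the second design the machine adapts to the order and scope of the user's request instead of pinning the user to a script.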

Of course, interactions controlled by the machine are not always undesirable. A mistaken command by you to erase everything inside your computer should not be casually executed. However, 95 percent of the overcontrolling interactions on the world’s computers don’t involve such grave situations. The sooner these software crutches vanish and the user is given control, the sooner machines will serve humans rather than the other way around.

The excessive-complexity fault: I am at my office, it is almost noon, and I discover with considerable panic that I forgot to retrieve from my home computer the crucial overheads I need for an imminent lunch meeting. No sweat. I’ll call home and have them shipped electronically to my office. As luck would have it, though, the only one home is the electrician, but he is game. “Please turn the computer on by pushing the button on top of the keyboard,” I say. He is obviously a good man, because I hear the familiar chime through the phone. During the two minutes the machine takes to boot up, the electrician asks why the machine doesn’t come on instantly, like a light bulb.

I refrain from telling him that I share his consternation. For three years I have been trying to interest sponsors and researchers in a project that would address this annoying business in which a human respectfully begs permission from a computer’s software to turn the machine on or off. Instead, I explain that the machine is like an empty shell and must first fill itself with all the software it needs to become useful. “Okay,” I say, “pull down the Apple menu and select the Call Office command,” which I had providentially defined some time back. He complies, and I hear my home modem beeping as it dials my office modem. On the second ring I hear the office modem next to me answer. We are almost there, I muse hopefully.

“Do you see the message that we are connected?” I ask.

“Nope,” he responds. Another minute goes by and he reads me an alert message that has appeared on my home computer’s screen. I know what happened. The modems latched correctly and can send signals to each other but for some unknown reason the software of the two machines cannot communicate. I ask him to hold while I restart my machine. Like many people, and all computer professionals, I know that restarting with a clean slate often solves problems like this one, even though I have no idea what actually caused the problem.

As I guide the electrician through rebooting my home computer, I get angry, because these problems would be reduced if my office computer were calling my home machine rather than the other way around. But my home machine has only “remote client” software, meaning that it can call out but cannot receive calls. This distinction between clients and “servers” is a residue of corporate computing and the time-shared era’s central machines, which dispensed lots of data to the dumber terminals. The distinction must vanish so that all computers, which I would dub “clervers,” can dish out and accept information equally, as they must if they are to support the distributed buying, selling, and free exchange of information that will take place in the information marketplace.

When my home machine has again booted up, we go through the modem dance once more, and this time the software latches. I ask the electrician to select the Chooser command and click on the Appleshare icon and then to click on the image of my office machine. Now he needs my password, which I give him promptly. He reports activity on his screen that I interpret as success. I tell him how to locate the precious file I need and send it to me. In two and a half more minutes the overhead images arrive safely in my machine. I thank the electrician profusely and send the images to my printer, now filled with blank transparency sheets, and I’ve got them. I arrive at the meeting 30 minutes late.

Why couldn’t I simply give my home computer in one second a single command like “Send the overheads I created last night to my office” and have them arrive three minutes later? Fellow techies, please don’t tell me it can be done with a different kind of machine or a different operating system, macros, agents, or any other such tools, because I know and you know better. This simple act just cannot be carried out easily and reliably with today’s computers.

As system designers we must begin the long-overdue corrective actions against the excessive-complexity fault by simplifying options, restricting them, and, most important, reversing a design point of view rooted in decades-old habits. We should tailor computer commands and options to users’ needs, rather than tailoring them to existing system and subsystem needs and expecting users to obediently adapt. We must do for computer systems what we have done for cars: get away from giving people controls for the fuel mixture, ignition timing, and the other subsystems, and give them a steering wheel, a gas pedal, and a brake for driving the car.
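The steering-wheel idea corresponds to what programmers today would call a facade: one user-level command that hides all the subsystem steps. A minimal sketch follows; every function and file name in it is hypothetical, standing in for the boot-dial-authenticate-fetch sequence the electrician was walked through by phone.

```python
# Hypothetical subsystem steps (the "fuel mixture and ignition timing").
def boot():
    return "booted"

def dial(number):
    return f"connected:{number}"

def authenticate(password):
    return password == "secret"

def fetch(filename):
    return f"contents of {filename}"

# The "steering wheel": one user-level command; the subsystems stay hidden.
def send_overheads_to_office(filename, office_number, password):
    boot()                          # power on and load software
    dial(office_number)             # establish the modem connection
    if not authenticate(password):  # the Chooser/password step
        raise PermissionError("bad password")
    return fetch(filename)          # locate and transmit the file

# One command replaces the whole modem-and-Chooser dance:
print(send_overheads_to_office("overheads.ppt", "555-0100", "secret"))
```

The user states the goal once; whether the steps beneath change from release to release is no longer the user's problem, which is exactly the reversal of viewpoint the essay asks for.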
