The Computer That Wouldn’t Disappear
While continuous computing is now a practical reality, it has been a long time coming. The first serious work on it began 17 years ago at Xerox’s famed Palo Alto Research Center (PARC). That’s where computer scientist Mark Weiser set out to study the notion of ubiquitous computing,
Ubiquitous computing: Weiser’s original Web pages on the subject are preserved at www.ubiq.com/hypertext/weiser/UbiHome.html.
which he defined as “activating the world” – creating networks of small, wireless computing devices that permeated the physical structures around us, where they would supposedly anticipate our needs and act without requiring our attention. Weiser’s earliest experiments, funded by the U.S. Department of Defense, involved a network of infrared sensors scattered around PARC. The sensors communicated with prototype “tabs” – small, wireless displays that functioned as labels or sticky notes – and with tablet-sized handheld computers and large display boards. Weiser envisioned hundreds of these devices installed in rooms, homes, and office complexes, where they would eventually become “invisible to common awareness,” as he predicted in a 1991 article for Scientific American. “People will simply use them unconsciously to accomplish everyday tasks,” he wrote.
Tragically, Weiser died of cancer in 1999, at age 46. But by then, others had taken up his call, including the famed product-design consultant Donald Norman, who squeezed an entire thesis into the title of his 1998 book, The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. People might be more efficient if their spaces, work flows, and communications were fully digitized, but this wouldn’t happen until improved technology relieved them of the sense that they were interacting with “computers” at all, Norman argued. He called for a new generation of “information appliances” that would facilitate specific activities – such as teleconferencing, shopping, photography, or exercise – without calling attention to themselves. Echoing Weiser, Norman wrote that these appliances would “become such an intrinsic part of the task that it will not be obvious that they are there. They will be invisible
Invisible: Blog reader Gardner Campbell comments: “These are compelling essays and concepts, but a small worry persists: will the grail of invisible, continuous, ubiquitous computing turn out to be a cognitive deadener, too? Some things work best when they’re visible and a little recalcitrant: writing, for example, or thinking, for another example. If we use symbols effortlessly, there’s a risk we’ll settle for the path of least resistance automatically rather than go for the more ambitious and difficult goals, the computer equivalent of a set of grunts and gestures instead of a language, which involves a fair amount of work to acquire and use well but has rich payoffs in terms of semantic density.”
Author’s response: I agree. That’s why I point out in this section and elsewhere that continuous computing is not about making computers invisible.
like the embedded processors in the automobile or microwave oven.”
Researchers got busy building these appliances at places like MIT’s Laboratory for Computer Science and its Artificial Intelligence Laboratory (the two have since been folded into one large lab). In 2000, the labs launched a five-year, industry-funded initiative called Project Oxygen,
Project Oxygen: See oxygen.lcs.mit.edu/Overview.html.
so named because the founding scientists believed that computation would eventually be “freely available everywhere, like batteries and power sockets, or oxygen in the air we breathe.” Like Weiser, the Oxygen researchers have focused on a combination of handheld devices and networks of sensing and communications equipment embedded in the environment – cameras, microphones, displays, wireless transmitters and receivers, and the like. Their most famous prototype is the Intelligent Room, a conference room rigged with sensors and displays that responds to voice commands, saves audio records of users’ discussions, and calls up presentations or recordings of prior meetings. The idea, according to the MIT researchers, is to automate as many aspects of human collaboration as possible.
Ubiquitous-computing research continues at PARC, where researchers are working on technologies such as embedded sensors trained to zero in on specific conversations in busy rooms so that people watching by videoconference can join in. And in Europe, a three-year, $28 million “Disappearing Computer” initiative from 2001 to 2003 resulted in several ongoing projects on “ambient computing,” the idea of augmenting everyday objects with small, wirelessly networked sensors.
But here’s the surprise: the tools that are actually bringing us continuous computing aren’t invisible. In fact, they are the very technologies Weiser and his successors were trying to sideline: off-the-shelf computing devices such as laptops and cell phones,
Cell phones: They’re now constant companions for 1.7 billion people worldwide. According to market research firm IDC, more than 690 million phones were shipped in 2004 alone. In the first quarter of 2005, vendors shipped 8.4 million “converged mobile devices,” meaning phones that also function as PDAs and can run many types of software applications – an increase of 134 percent over the first quarter of 2004. More than 182 million people in the United States subscribe to cellular services, and in 2004 they spent more than a trillion minutes using their phones.
both of which let users tap into Web-based social-software systems built in a largely unplanned way by people using common programming languages and shared, open communications protocols and development tools. These systems don’t have to be designed as unified, integrated wholes, like Project Oxygen’s Intelligent Room, to be useful tools for social computing; they can just as well emerge from the bottom up, the way peer-to-peer networks and the Web itself did. (Indeed, one reason that projects at PARC, Project Oxygen, and other labs have never really blossomed into commercial systems may be that they are too heavily engineered
Too heavily engineered: Blog reader Gene Becker comments: “I agree with your assessment and would add that in many cases, they are technology solutions in search of a problem. What is the question to which ‘ubicomp’ is the best answer?”
for preconceived uses.) And we don’t really need computers to disappear into the woodwork, or to have elaborate spoken-word interfaces. In fact, today’s social-software boom rests on common devices such as mobile phones, computers, digital cameras, and portable music players.
“One of the things that really blew my mind was a trip last year at Christmastime to a mall in the DC suburbs,” says Thomas Vander Wal,
Thomas Vander Wal: Best known for popularizing two concepts, the “infocloud” (the aggregate of one’s personal digital data, which increasingly resides on networks rather than on desktop PCs or permanent media) and “folksonomies” (the knowledge structures that emerge in place of hierarchical taxonomies when groups of people tag digital data using an unconstrained vocabulary).
an Internet-application designer whose writings are widely followed by developers of social-software applications. “Which is, as places go, a little bit more technically advanced than the more rural areas at the center of the U.S., but it’s still not the Bay Area or New York. But I was seeing people 50 and older waiting in line to get their packages wrapped and staring at their mobile devices. I don’t know if they were text-messaging their kids or browsing the Web or what, but their mobile devices were being used for more than just calling somebody. It was at that point that I thought, ‘We’re almost there’ – wherever ‘there’ is.”