
Computing Goes Everywhere

The dream of “ubiquitous computing” has been around for a while. Now it’s serious enough that a company like IBM is willing to throw $500 million at it.
January 1, 2001

Not far from the modest office where, 30-odd years ago, Douglas Engelbart invented the mouse, multiple-window screens and other mainstays of personal computing, an SRI International computer scientist approaches a mock-up of a white convertible representing the car of the future. He plugs a notepad-sized computer into the dash, and at once the vehicle’s 1,400-odd computerized systems become accessible through a simple user interface. Using voice commands, he demonstrates how he can request a CD track, link wirelessly to his office to check voice mail or have his e-mail read aloud by a speech synthesizer. One message is from his refrigerator asking whether he’d like to pick up orange juice on his way home. “Show me the grocery stores,” he orders the car. The vehicle quickly accesses the Internet and relays directions to the nearest supermarkets.

Shopping done, our motorist arrives at his apartment, where the Collaborative Home e-Fridge (CHeF) is waiting for the OJ it requested. The juice is duly logged in, but when lemonade is removed, the fridge announces it’s now out of lemonade, and asks whether the item should be added to the shopping list. CHeF even knows the pantry contents. So when asked to suggest something for dinner, it flashes the recipe for a chicken dish on its screen: in-stock ingredients are highlighted in green, those missing appear in red, while absent items already on the shopping list are rendered in blue.

Ah, the future of computing. Whether it’s with refrigerators, in cars, around the office or on the high seas, powerful new systems that you can access through words and maybe even gestures, and that will then scurry invisibly to do your bidding, are promising to friendly-up the world. The dream is called “ubiquitous” or “pervasive” computing, and it’s fast becoming the hottest thing in computer science. The ultimate aim is to seamlessly blend the analog human world with all things digital. That way, either by carrying computing and communications power with you, or by accessing it through an infrastructure as widespread as electric power is today, you will tap into this world on your terms and in your language, not a machine’s.

Less than a decade ago, such dreams were confined to far-out future factories such as SRI, Xerox Corporation’s Palo Alto Research Center (PARC) and MIT’s Media Lab. But recent advances in computing power, storage, speech recognition and especially wired and wireless networking, coupled with the rise of the World Wide Web, are bringing the dream within grasp of the real world. That essential truth explains why Microsoft and Intel, which built their fortunes on the stand-alone personal computer, are shifting gears toward this new, mobile, networked world. IBM has just committed nearly $500 million over the next five years to study pervasive computing and create the hardware and software infrastructure to support it. Other players include Sony, Sun Microsystems, AT&T, Hewlett-Packard (HP) and just about every corporate or university computer lab worldwide.

Uncertainties abound. Fights are under way over competing technologies and standards, and no one even knows how many computing devices people will want to carry in the future, let alone what type. Still, the field is maturing rapidly. Researchers agree more uniformly than ever on where technology is headed, or at least on which main paths it’s likely to take. This allows what was previously a hodgepodge of visions and predictions about the future to be classified into three broad technological frameworks: 24/7/360; who, what, when, where; and the digital companion.

While these categories, signifying the importance of pervasiveness, awareness and personalization, don’t capture every aspect of ubiquitous computing, they do describe its essence. And just by walking into computer labs these days, you get the strong sense that the progress made in addressing these challenges has computer scientists convinced a major breakthrough is within their grasp. “Ubiquitous computing is viable, and will soon be commercially practical,” asserts William Mark, SRI’s vice president of Information and Computing Sciences. “The revolution is about to happen.”

24/7/360

The widely acknowledged father of ubiquitous computing was the late PARC computer scientist Mark Weiser, who coined the term in 1988. Weiser described a world where each person would share thousands of highly distributed but interconnected computers. This computing power, he argued, should blend into the background, hidden from people’s senses and attention.

In the early ’90s, PARC researchers created ParcTab, a handheld display that connected via infrared signals to a network computer so researchers could access files without being tied to their desktops. Other trailblazing work took place at the Olivetti Research Laboratory in Cambridge, England (now AT&T Laboratories Cambridge), which pioneered the Active Badge. The badge transmitted an infrared signal that allowed people to be tracked throughout a building via wall-mounted sensors, among other things enabling phone calls to be forwarded automatically to their location. And then there was the ultimate popularizer: MIT’s Media Lab. Researchers at this largely industry-funded lab spread the word about concepts such as news-gathering software agents that would tailor each morning’s electronic newspaper to an individual’s tastes.

These early steps have now loosed a flood of innovation and promise at computer labs worldwide. Today, it is a fundamental tenet of ubiquitous computing that computational power and services will be available whenever they’re needed: that’s the 24/7 part. And not just throughout a building, but everywhere: that’s the 360, as in degrees around the globe. Under the 24/7/360 umbrella, however, lie two radically different approaches. One continues the drive to push computational power into objects with ever smaller footprints, via souped-up laptops, handhelds and wearables. The other holds that tomorrow’s computing resources will not be carried on specific devices. Instead, they will live on networks. In this view, much as people tap electric power by plugging into any outlet, so should applications and files be reachable from any display or information appliance, be it in a car, hotel or office. The network, to paraphrase the folks at Sun, becomes the computer.

This utility-like model of computing is catching fire at companies that build the backbone for the Internet and for enterprise computing networks: the communications, applications, storage and services associated with corporate computer systems. Indeed, of IBM’s recent $500 million commitment to pervasive computing, $300 million will go toward building an “intelligent infrastructure” of chips, mainframes, servers, databases and protocols for supporting the data-rich, mobile future.

Sun’s take on this idea is evidenced in its four-year-old Public Utility Computing (PUC) project. The aim is to create dynamic virtual networks, or supernets. Each supernet would be assigned a public Web address that its members contact. After authenticating themselves through a password or smart card, users would receive the encryption keys and addresses for entering the private supernet, where they could securely retrieve files and collaborate in real time. With PUC, there is “no distinguishable difference between being in HP’s conference room or in my office, or at home, or at the beach, or in New York,” asserts senior manager Glenn Scott.
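The join sequence Scott describes (contact a public address, prove your identity, then receive keys and a private address) is easy to picture in code. Here is a minimal Python sketch; the gateway class, the password check standing in for a smart card, and the addresses are all invented for illustration, not taken from Sun’s PUC implementation:

import secrets

class SupernetGateway:
    # Hypothetical public rendezvous point for one virtual network.
    def __init__(self, members):
        self.members = members                    # user -> secret (stand-in for a smart card)
        self.session_key = secrets.token_hex(16)  # shared encryption key for the supernet
        self.private_address = "10.8.0.1"         # invented internal address

    def join(self, user, secret):
        # Authenticate first; only then hand out the keys and addresses.
        if self.members.get(user) != secret:
            raise PermissionError("authentication failed")
        return {"key": self.session_key, "address": self.private_address}

gateway = SupernetGateway({"alice": "s3cret"})
credentials = gateway.join("alice", "s3cret")
print(credentials["address"])  # the client would now tunnel encrypted traffic here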

PUC technology could also allow organizations to store and retrieve data and access sophisticated computational services, such as database software that analyzes customer trends. Only instead of purchasing these expensive systems, companies would pay solely for what they used. This might be ideal for small businesses, argues Scott. Imagine a 10-person operation that wants to tap big accounting software requiring a high-powered machine that the outfit can’t afford. Under the PUC concept, he says, the firm could simply “rent” the application as needed, perhaps once a week for 10 minutes. Since PUC works at the network level rather than inside the software, any application can be easily brought into the supernet. This, says Scott, makes it far more powerful than the pay-as-you-go systems offered by today’s applications service providers.

The catch comes in making everything secure. Scott says field trials last year validated the concept for communications and storage, which are mainly concerned with encrypting the data, both while it is being transmitted and once it is stored. But providing secure computation, assuring users their data isn’t inadvertently copied, for instance, is dicier. Any solution will likely involve securing both hardware and software, a tricky combination Sun is only just exploring. Still, Scott believes PUC is the way of the future, and Sun has filed 13 patents around the technology.

This utility concept looks years ahead, but others are taking more immediate aim at a scaled-back form of 24/7/360. Since 1998, what is now AT&T Laboratories Cambridge has made its Virtual Network Computing software available free for download. VNC turns any Web browser into a remote display for a desktop computer, allowing people to access files and applications from just about any device, laptop to PC, Mac to Palm. What’s more, it works over standard telephone lines and cell phones, lightening the data stream by transmitting only the pixels that change from second to second.
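That bandwidth trick is worth a sketch. In rough Python (a toy illustration, not AT&T’s actual VNC protocol), the idea is to compare successive frames tile by tile and ship only the tiles that differ:

def changed_tiles(prev, curr, tile=8):
    # Compare two frames (2-D lists of pixel values) tile by tile and
    # return (x, y, pixels) for each tile that changed.
    updates = []
    for y in range(0, len(curr), tile):
        for x in range(0, len(curr[0]), tile):
            old = [row[x:x + tile] for row in prev[y:y + tile]]
            new = [row[x:x + tile] for row in curr[y:y + tile]]
            if old != new:
                updates.append((x, y, new))  # only this region crosses the wire
    return updates

frame1 = [[0] * 16 for _ in range(16)]
frame2 = [row[:] for row in frame1]
frame2[3][5] = 255                         # one pixel changes...
print(len(changed_tiles(frame1, frame2)))  # ...so only 1 of the 4 tiles is sent

On a slow phone line, the difference between shipping a full frame and a handful of changed tiles is what makes a remote desktop usable at all.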

It’s the same principle as PUC, on a more personal level. The reason people carry bulky laptops is not to have all their data at hand, argues AT&T researcher Quentin Stafford-Fraser. “What you really want to carry around with you when you’re going somewhere is your environment,” he says. That means your sets of preferences, dates, desktop and so on. With VNC, he notes, “I can pretty much go anywhere in the world and be connected through to my machine that is sitting on the desk here.”

The system isn’t secure, and it doesn’t offer the file-sharing capabilities of PUC. Still, its cross-platform capability is compelling, as AT&T researchers found when one corporate user’s network server crashed while its systems administrator was off camping. Reached on his cell phone, the technician was told to return 250 kilometers to the office. Instead, he whipped out his Palm Pilot, called up his VNC-enabled desktop and fixed the problem, all without leaving his tent.

Stafford-Fraser reports there are as many as 10,000 VNC downloads a day, with about a million machines running the software. But that’s a blip on the screen compared with what AT&T and others believe might be the prime player in 24/7/360 for years to come: the already ubiquitous telephone. This idea is embodied in AT&T’s VoiceTone project, which seeks to replace a normal dial tone with an automated version of yesteryear’s know-everything switchboard. “AT&T, how may I help you?” the voice tone might inquire. Thanks to speech recognition, speedy processing, the Web presence of just about everything, and technologies such as text-to-speech synthesis, callers can ask for messages and traffic reports, check the weather and sports scores, or make restaurant reservations, all in normal language and without logging on in the conventional way.

AT&T is developing some of these services itself. However, many will be provided through voice services concerns such as Tellme Networks of Mountain View, Calif., in which AT&T has invested $60 million. Tellme and competitors such as Santa Clara-based BeVocal seek to turn ordinary telephones into gateways to the Web. At Tellme, for example, callers dial an 800 number, then navigate the system with spoken commands such as “Restaurants,” “Boston, Massachusetts,” “Chinese.” They then get a list of candidates, and can even hear Zagat reviews. If they wish to make a reservation, they’re connected to the restaurant free of charge.
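Under the hood, such a call amounts to walking a menu tree one recognized phrase at a time. Here is a minimal Python sketch with invented listings and no actual speech recognition; a real service like Tellme’s sits behind a recognizer and a text-to-speech engine:

menu = {
    "restaurants": {
        "boston, massachusetts": {
            "chinese": ["Example Palace", "Hypothetical Garden"],  # invented listings
        }
    }
}

def navigate(tree, *utterances):
    # Walk the menu with a sequence of recognized phrases.
    node = tree
    for phrase in utterances:
        node = node[phrase.lower()]
    return node

print(navigate(menu, "Restaurants", "Boston, Massachusetts", "Chinese"))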

Tellme co-founders Angus Davis and Mike McCue left Netscape to pursue the vision of telephone-as-computer-interface. “We were these browser guys, and we thought it was cool that there were 150 million Web browsers,” explains Davis, Tellme’s director of production. “But we thought, wouldn’t it be really cool if we could build a user interface to the Internet that reached two billion people? And that’s what made the phone exciting.”

Who, What, When, Where?

Computing by the billions may be too much to hope for in the near future. Still, it’s already clear that more and more computing power and services will reside in networks, and that these services will be increasingly accessible, through wired and wireless networks and via myriad devices. Emerging software technologies such as Sun’s Jini and Microsoft’s Universal Plug and Play promise to allow systems and services to be accessed no matter what operating system or programming language they employ. On the hardware front, Dallas market research firm Parks Associates estimates that 18.1 million information appliances (handheld computers, Internet-connected TVs, mobile phones, car navigation systems and game consoles) shipped last year. Nascent wireless standards, such as Bluetooth for short-range radio communications, will add more flexibility for linking devices and networks.

But before even a few folks have the benefit of truly ubiquitous computing, great strides must be made toward creating technology that serves people rather than the other way around. That means objects and services must sense and respond to what is going on around them, so that they can automatically do the right thing: hold a routine call if you’re busy, let you know if your flight’s delayed, or inform you of a traffic jam and suggest a better route. Such feats are increasingly known as context-aware computing. However, to do this job to the utmost, networks must know something about the people using them, often including their identity and location. This will force a choice: do people want to periodically cede privacy in exchange for better service?

A lot of the effort to track people and devices, and to coordinate their interaction, dates back to Olivetti’s (now AT&T’s) Active Badge program. The latest twist is called “sentient computing,” which replaces the infrared-emitting active badges with ultrasound transmitters, dubbed “bats.” Since ultrasound provides far more precise positioning data than does infrared, bats make it possible to construct a computer model that follows people, objects and their relation to each other. The computer, explains researcher Pete Steggles, creates a “circle around me that’s about a foot in radius, and there’s another little circle around this device. And when the one is contained in the other, then I’m in a sense the owner of that device, and appropriate things happen” (see “Sentient Computing,” sidebar).
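Steggles’s ownership rule reduces to a simple geometric test: a device belongs to whoever’s circle fully contains it. A minimal Python sketch, assuming the ultrasonic bats have already supplied positions (the coordinates here are invented):

from math import hypot

def contains(outer, inner):
    # True if circle inner (x, y, r) lies entirely within circle outer.
    ox, oy, orad = outer
    ix, iy, irad = inner
    return hypot(ix - ox, iy - oy) + irad <= orad

person = (0.0, 0.0, 0.3)   # roughly a one-foot radius, in meters
phone = (0.1, 0.05, 0.05)  # small circle around a tracked device
if contains(person, phone):
    print("person owns phone: forward calls here")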

Another way to track objects is through radio-frequency identification tags, like those used to monitor livestock. These “e-tags” range in size from a grain of rice to a quarter and so can conceivably be embedded in everyday objects. Most rely on inductive coupling, like that used in the bulkier tags placed on clothes to deter shoplifting. Unlike bats, e-tags have no internal power source that needs periodic replacement. Instead, a signal from a tag reader induces a current in the implant, which consists of a coil attached to a silicon chip. Energy captured by the coil is stored in a capacitor that powers the chip and causes it to transmit a unique identifier to the reader. From there, the data is relayed wirelessly to the Internet or company intranet, summoning more information relating to the tagged item.

Last year, PARC researchers e-tagged everything from paper to books to copiers around the lab. That way, anyone carrying a tablet computer equipped with a reader could access additional information and services associated with the tagged item. Say, for example, a person approached a flyer announcing a lecture. By positioning the computer near the title, he or she could call up the talk abstract. Holding it near the date and time announcement, where a separate tag was embedded, would schedule the event in an electronic calendar. Even better, many tagged items activated services associated with their physical form. In one demonstration, bringing a tagged French dictionary near a computer summoned a French version of the English document then on the screen. Roy Want, who led the project but has since left Xerox for Intel, describes e-tags as “an evolution of the bar code. I think in the future almost anything that is manufactured and traded will contain an electronic tag.” Such tags, he adds, will link to the Internet to provide information about the item’s origin, history and ownership.
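In software terms, the PARC demonstrations amount to a dispatch table: the reader reports a tag’s identifier, and the system invokes whatever service is bound to it. A toy Python sketch; the tag IDs and actions are invented for illustration:

actions = {
    "tag:flyer-title": lambda ctx: print("Showing talk abstract..."),
    "tag:flyer-date": lambda ctx: ctx["calendar"].append("Lecture, Friday 4 p.m."),
    "tag:french-dict": lambda ctx: print("Translating " + ctx["document"] + " to French"),
}

def on_tag_read(tag_id, ctx):
    # Invoke whatever service is associated with the tagged object.
    handler = actions.get(tag_id)
    if handler:
        handler(ctx)

ctx = {"calendar": [], "document": "memo.txt"}
on_tag_read("tag:flyer-date", ctx)
print(ctx["calendar"])  # ['Lecture, Friday 4 p.m.']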

Although a world populated by bats and e-tags promises to extend computing to almost anything, it does not address one of the biggest hopes for ubiquitous computing: that sensors, effectors and actuators can also be incorporated into devices, making systems capable of both processing information and responding to it. Former PARC director John Seely Brown, for example, foresees a world where millions of networked sensors are placed in roadways, using information about traffic to ease congestion and thereby “harmonize human activity with the environment.”

The Digital Companion

While promising to add great utility to people’s lives, most context-aware technologies depend on direct communication between humans and a known device or application. In reality, whether at home or on the road, people will also need help tapping services unknown to them, and with which they won’t ever want to interact directly.

Enter a third major aspect of ubiquitous computing: software agents, or bots, that root around behind the scenes to find services and generally get things done without bothering humans with the details. Many bots are already on the market, cataloging the Web for Internet portals or tracking customer preferences for e-tailers. But a new generation is at hand. Some bots are specific to individual devices or applications. Others are more like executive assistants, looking for bargains, negotiating deals and rounding up dozens of services into larger, coordinated actions.

Among the first bots to hit the market could be context-aware applications that seek to prevent information overload by filtering e-mail, phone calls and news alerts. Many firms are tackling this problem. At Microsoft, software agents under development make these decisions based on such factors as message content, the kinds of communiqués users read first or delete without opening, and the message writer’s relationship with the reader or position in a company organization chart. Agents can then determine whether or not to interrupt by correlating that information, with the help of desktop sensors such as microphones and cameras, with whether the person is on the phone, busy at the keyboard or meeting with someone. If the person is out, the agents can even decide whether to track him or her down via pager or cell phone.
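The shape of such a decision can be sketched in a few lines of Python: score the message’s importance, estimate how busy the user looks to the sensors, and interrupt only when the first outweighs the second. This is a hand-rolled illustration with invented weights and features, not Microsoft’s statistical models:

def message_priority(msg):
    # Importance cues: sender's place in the org chart, content of the subject.
    score = 0.5 if msg["sender_is_manager"] else 0.1
    if "urgent" in msg["subject"].lower():
        score += 0.4
    return score

def user_busyness(sensors):
    # Desktop sensors: the microphone hears a phone call, the camera sees a visitor.
    return max(0.8 if sensors["on_phone"] else 0.0,
               0.6 if sensors["typing"] else 0.0,
               0.9 if sensors["visitor_present"] else 0.0)

def decide(msg, sensors, away=False):
    priority = message_priority(msg)
    if away:  # user is out: page for high-priority items, otherwise hold
        return "page" if priority > 0.7 else "hold"
    return "interrupt" if priority > user_busyness(sensors) else "hold"

msg = {"sender_is_manager": True, "subject": "URGENT: server down"}
print(decide(msg, {"on_phone": False, "typing": True, "visitor_present": False}))
# -> "interrupt": the boss's urgent note outranks mere typing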

Even this, though, is merely an appetizer for an idea, still without concrete embodiment, which SRI calls the “digital companion.” Much like Microsoft’s statistically based filters, it envisions agents that adapt to human needs, only on a much larger scale, as the facilitator idea from SRI’s Open Agent Architecture (OAA) is extended to include personalized agents that will stay with people for years or even decades. Just as a good secretary learns a boss’s preferences and even comes to anticipate his or her needs, so will a digital companion serve its human masters.

“Think of it as a PDA (personal digital assistant) on steroids,” relates SRI’s Mark. “It is your assistant, it is your broker to this set of services and devices available in the network.” Your companion, he says, will authenticate your identity and pay your bills. It will make travel arrangements based on your preferences, and will even see to it that the rental car’s radio is set to your desires. Can’t remember the wine you drank at a restaurant last week? Just ask your companion: It will reference your bills and maybe the restaurant’s wine list to find out. In short, says Mark, a digital companion will be a person’s “universal remote for the world.”

The ubiquitous-computing vision remains in many senses just that: a vision. Beyond the immense technological challenges of building a public utility infrastructure and creating digital companions loom staggering issues that run from programming for the networked world to real fears of Big Brother-like invasions of privacy. Jeffrey Kephart, who heads the Agents and Emerging Phenomena group at IBM’s Thomas J. Watson Research Center in Hawthorne, N.Y., even foresees the billions of agents that will soon be out there setting prices, bidding and making purchasing decisions as an economic wild card with potentially immense ramifications. “What we’re talking about is the introduction into the economy of a new economic species,” he says. “Heretofore we’ve only had humans.” He’s working to model and study the dynamics of such a system, and to divine ways to avoid price wars and generally keep things from getting out of control.

No one yet knows the solution to such puzzles, nor are the answers even evident in today’s mishmash of efforts. All of which means that truly ubiquitous computing could still be decades off.

Steadily, though, the major pieces seem to be coming together, giving rise to a view among some in the industry that the new day is at hand. SRI’s Mark is one such optimist. So, too, is Jim Waldo, chief engineer of Sun’s Jini effort, which, by removing many of the barriers that exist between systems based on different operating systems and languages, marks a big step toward the dream.

“My feeling about the whole ubiquitous computing thing is it’s getting to the point of almost being a supersaturated solution, and at some point the crystal’s going to form. And when it does, it’s going to happen really fast,” Waldo asserts. “There’s going to be lots of this base work. It’s going to be going nowhere, and all of a sudden it’s just going to be there.”
