Rethinking the Computer
Project Oxygen is turning out prototype computer systems
Howie Shrobe’s light fixtures are misbehaving this morning. When the principal research scientist in the Computer Science and Artificial Intelligence Laboratory instructs the system that automates parts of his office to “stay awake,” a voice emanating from a set of speakers obediently replies, “At your service.” And when Shrobe, SM ’75, PhD ’78, tells the system, “Close the drapes,” they magically glide shut, blocking out all light from the seemingly normal office. But when he says, “Turn on the lights,” nothing happens. Shrobe leans a little closer to the microphone array that sits on his desktop computer and repeats the command a little louder. When the room gives him the silent treatment again, he quickly types something on a keyboard; the lights turn on. He smiles and admits that he was playing around with the system a little this morning, which might explain why it’s acting up. After all, it is a work in progress.
Shrobe’s computerized office is just one of dozens of pervasive-computing technologies being developed as part of Project Oxygen, the lab’s five-year, $50 million effort to design computer systems that are as ubiquitous as the air we breathe and as easy to communicate with as other people. The end result, as originally envisioned by Michael Dertouzos, PhD ’64, the late director of the Laboratory for Computer Science, is expected to be a collection of technologies embedded in workplaces and homes working together seamlessly, and often behind the scenes, to help us go about our daily lives. More than 150 MIT researchers have contributed to the effort, as well as staff from the project’s six industrial partners, which include Nokia and Hewlett-Packard. Now in its fourth year, the project is turning out working prototypes, including workspaces that adjust themselves according to their inhabitants’ habits, location-aware sensors that help people find their way around buildings, and computer chips that configure themselves to best suit different applications. In the process, the project has brought together researchers from many disciplines who may not otherwise have collaborated, often with unexpected results.
When Project Oxygen began in 2000, one of its first undertakings was to further Shrobe’s prior work on an intelligent conference room that helps people run more efficient meetings. The latest version of the room can, when prompted by spoken commands, show agenda items on a wall display, transcribe and save participants’ comments, or find pertinent video clips from previous meetings.
Over the past four years, the intelligent-room project has expanded to include other places where people share ideas, even the vicinity of the water cooler. Shrobe’s group has designed kiosks to tuck into these informal meeting spaces so researchers can record casual work-related conversations and technical scribblings. Today the group is working to couple intelligent spaces with a software platform that will allow people in different locations to share and display data with whatever gadgets happen to be handy: perhaps cell phones, or a projector in a meeting room.
The researchers are also considering problems that will arise and creating solutions as they go. What happens, Shrobe asks, if you’re in a meeting and don’t want to be disturbed, and then “I just start blasting bits onto your [personal digital assistant] screen?” Shrobe’s group suggests managing such requests according to the cultural rules whereby organizations already govern access to their members. If you wanted to show a business contact a presentation, you probably couldn’t just march into her office unannounced and take over her computer; you’d have to schedule a time to meet with her or find out how best to send her the presentation. Likewise, a ubiquitous system would need to coordinate with other organizations’ systems: behind-the-scenes digital receptionists that would tell it how particular people could be reached.
Sound and Vision
In the future, members of Project Oxygen say, computing power will cost next to nothing. That means that computation-heavy technologies, such as vision systems and software that understands spoken requests, will be able to replace standard mouse-and-keyboard interfaces. “We have to extend the modality beyond pointing and clicking,” says Victor Zue, ScD ’76, codirector of the lab and, along with Anant Agarwal and Rodney Brooks, one of the leaders of Project Oxygen. Instead of being tethered to a desktop and other stand-alone devices, people should be able to interact with computers easily and naturally, from a distance, through conversation or gesture.
As a first step, principal research scientist James Glass, SM ’85, PhD ’88, is creating language-processing systems that go beyond simple speech recognition and “track some sort of meaning, to understand the content and context of the conversation,” he says. His group created a system that allows someone to inquire over the phone about restaurants in the Boston area. The system analyzes each sentence using grammatical rules to figure out what information the caller needs, then searches a database that includes information about local restaurants: their locations, phone numbers, types of cuisine, and price ranges. Since this database is constantly changing, Glass says, it’s difficult for the program to learn every restaurant’s name. So instead, it assumes that unknown words are probably restaurant names and searches the database for likely matches. Then the system reprocesses the question and finds the phone number in a matter of seconds.
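The fallback Glass describes, treating an out-of-vocabulary word as a probable restaurant name and hunting for near matches in the database, can be sketched in a few lines of Python. The restaurant list, phone numbers, and function name below are invented for illustration; the real system works on live Boston-area listings.

```python
import difflib

# Hypothetical mini-database standing in for the live restaurant listings.
restaurants = {
    "legal sea foods": {"phone": "617-555-0101", "cuisine": "seafood"},
    "pizzeria regina": {"phone": "617-555-0102", "cuisine": "pizza"},
    "oleana": {"phone": "617-555-0103", "cuisine": "mediterranean"},
}

def lookup_unknown_word(word, db, cutoff=0.6):
    """Treat an unrecognized word as a likely restaurant name and
    return the closest-sounding names from the database."""
    return difflib.get_close_matches(word.lower(), db.keys(), n=3, cutoff=cutoff)

# A recognizer that has never seen "Oleena" can still resolve it to a listing:
matches = lookup_unknown_word("Oleena", restaurants)
print(matches, restaurants[matches[0]]["phone"])
```

Once a match is found, the system can substitute the known name back into the parsed question and answer it normally.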
But speech is just one mode of communication. “One of the things about Oxygen is that it’s not trying to develop [stand-alone] technologies in networking, speech, and vision,” says Zue. “Increasingly, it’s the integration of these technologies.” Glass’s group and the vision group of associate professor Trevor Darrell, SM ’90, PhD ’96, are collaborating on a system that combines speech and vision technologies. The system allows someone standing in front of a projected wall display to create and manipulate geometric shapes by gesturing and giving spoken commands such as “add a yellow pyramid here,” or “resize this.” The system tracks the person’s movements through a stereo camera and captures his or her voice through a nearby microphone array. Although the prototype is fairly simple, Darrell imagines that future systems may be used in physical-therapy programs or video games.
In some cases, people won’t need to give commands because computers embedded in their offices will anticipate their needs. The groups headed by Shrobe and Darrell have developed prototype offices that can learn their occupants’ patterns of behavior. Stereo cameras first track how a subject uses the space. Once the system understands how people’s locations correspond to their needs, computers, lights, and even radios can react to their movements. “A normal computer is blind to whether I’m sitting in front of it, sitting on the couch, or off in the kitchen making coffee,” says Darrell. But a vision-enabled room could direct a cell-phone call to voice mail if it recognized that the recipient was sitting at a table with three other people and, therefore, likely having a meeting.
Location, Location, Location
Some Oxygen researchers created new hardware devices and even processors to help realize the pervasive-computing dream. Associate professor Hari Balakrishnan built beacons and receivers that work together to pinpoint a user’s location. Called Cricket, his system uses wall- and ceiling-mounted beacons that simultaneously send out radio and ultrasound signals. When the radio signal, which travels far faster than the ultrasound signal, reaches a receiver installed in a badge or handheld device, it starts a timer that runs until the ultrasound signal catches up. The receiver then calculates the distance between the beacon and itself. When the receiver determines the distance to three beacons, it can pinpoint its location (and the user’s) to within a couple of centimeters.
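The geometry behind Cricket is straightforward: the radio pulse arrives essentially instantly, so the gap between radio and ultrasound arrivals, multiplied by the speed of sound, gives the distance to a beacon, and three such distances fix a position. A minimal sketch, with invented beacon coordinates and function names and the usual textbook linearization of the three circle equations:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature; radio travel time is treated as zero

def beacon_distance(delta_t):
    """Distance implied by the gap between radio and ultrasound arrival times."""
    return SPEED_OF_SOUND * delta_t

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position from distances r1..r3 to beacons at known positions p1..p3.
    Subtracting pairs of circle equations yields two linear equations in x, y."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# An ultrasound lag of about 5.83 ms corresponds to roughly 2 m:
d = beacon_distance(0.00583)
```

In practice the receiver must also reject reflections and pick the beacon whose ultrasound pulse matches its radio message, which is where most of Cricket’s engineering effort goes.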
When Balakrishnan started working on the project in 1999, he thought it would be useful for handheld applications. Four years later, his research has taken a different turn. Now he thinks that its “killer application” may be in wireless sensor networks. “If you put a sensor out and it starts telling you something about the environment, unless it tells you where it’s coming from, it’s useless,” he says. A second version of the system, which combines Cricket with sensors that run on a special operating system, will be commercially available later this year.
Another new piece of hardware can separate and amplify a speaker’s voice from within a crowd of chattering people. The one-by-two-meter array of more than 1,000 microphones delays the signal from each of the microphones depending on how far it is from the speaker, then combines all the signals so that only the waves from one particular point in the room are amplified. At the same time, the waveforms from other noise in the room cancel each other out and are dampened. The system now works only with a stationary speaker, but the researchers plan to integrate it with vision technology, allowing it to track a professor and amplify his voice as he moves around an auditorium.
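The array implements the classic delay-and-sum beamforming idea: shift each microphone’s signal to compensate for its extra travel time from the chosen point, then sum, so that sound from that point adds coherently while off-focus sound tends to cancel. A toy sketch in Python, with made-up positions and a sample rate chosen for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, mic_positions, focus_point, sample_rate):
    """Align every microphone's signal to the nearest mic's timeline for a
    source at focus_point, then average the aligned signals."""
    dists = [math.dist(m, focus_point) for m in mic_positions]
    ref = min(dists)
    out = [0.0] * len(signals[0])
    for sig, dist in zip(signals, dists):
        # Samples of extra travel time for this mic relative to the nearest one.
        shift = round((dist - ref) / SPEED_OF_SOUND * sample_rate)
        for i in range(len(out)):
            if i + shift < len(sig):
                out[i] += sig[i + shift]
    return [v / len(signals) for v in out]
```

With two microphones half a meter apart and an impulse arriving from a distant source, the shifted copies line up on the same output sample; an impulse from anywhere else would land on different samples and be attenuated by the averaging.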
The array runs on a processor, designed by Agarwal and known as the Raw chip, that can reconfigure itself to suit many applications. Traditionally, hardware manufacturers will hand-design a chip wire by wire to make sure that signals get to the right place at the right time, depending on how the chip is to be used. For computers to reach their processing potential, they often need special add-on cards designed for specific applications. The Raw chip allows software to control the paths signals take on the wires, so it can customize itself to handle many different tasks. A handheld device powered by the Raw chip can just as easily run graphics software as make a cellular-phone call; such versatility would previously have required two separate chips, increasing costs and taking up valuable real estate on small devices.
The Future of Oxygen
Although the goal of Oxygen is to make interacting with computers far easier, the first generation of pervasive computers is bound to come with a new set of problems. “If you think computers are frustrating now, just wait,” says principal research scientist Larry Rudolph, who heads the Oxygen Research Group, which is working to answer some fundamental questions about pervasive computing. For example, what happens when someone without an advanced degree in computer science asks an intelligent office to open the drapes, and it doesn’t respond? When a computer freezes, rebooting often does the trick, but you can’t exactly reboot drapes. One solution is instant messaging, which would allow a user to converse with the system to diagnose what might be wrong and learn how to fix it. “It’s very similar to what happens today if your Internet service doesn’t work at home. You call up Verizon and say, ‘Is the system okay?’” Rudolph says. But instead of talking with a technician, you would talk directly with the system.
Another forward-looking Oxygen project asks the question, if we incorporate hundreds or thousands of small, independent, and often unsupervised computing devices into our homes and workplaces, how can we be sure that they won’t be hacked? Srini Devadas, professor of electrical engineering and computer science, proposes using the unique physical properties of chips to serve as a sort of password. Chips that appear to be identical actually have minute differences, which can be measured by timing how long signals take to pass through certain paths on the chip. These delays can be recorded when the chip is created, stored in a central database, and used to create a chip-specific key. The idea is that in order for a computer, sensor, or smart card containing the chip to run certain software or authenticate a purchase, it would have to have the correct key.
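The enrollment-then-authentication flow Devadas describes can be sketched in software, with random numbers standing in for the manufacturing variation that real silicon provides. Everything here (class and function names, the 64-stage path model, the tolerance) is a hypothetical simplification; a real chip would be measured, not simulated.

```python
import random

class SimulatedChip:
    """Stand-in for real silicon: each instance gets its own random stage
    delays, mimicking the minute chip-to-chip variations that are measured."""
    def __init__(self, seed):
        rng = random.Random(seed)
        self.stage_delays = [rng.uniform(0.9, 1.1) for _ in range(64)]

    def measure(self, challenge):
        # Total delay along the path that the 64-bit challenge selects.
        return sum(d for d, bit in zip(self.stage_delays, challenge) if bit)

def enroll(chip, num_challenges=10):
    """At fabrication time, record challenge/response pairs in a central database."""
    rng = random.Random(0)
    pairs = []
    for _ in range(num_challenges):
        challenge = [rng.randrange(2) for _ in range(64)]
        pairs.append((challenge, chip.measure(challenge)))
    return pairs

def authenticate(device, pairs, tolerance=1e-9):
    """Later, the device proves its identity by reproducing the recorded delays."""
    return all(abs(device.measure(c) - r) < tolerance for c, r in pairs)
```

Because the delays are never stored on the chip itself, an attacker who steals the device still cannot clone another chip with the same physical fingerprint.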
The chip was a serendipitous discovery, Devadas says. When he originally joined Oxygen, he was interested in task automation. But through conversations with other researchers, he realized that security was a big issue in ubiquitous computing. “This is a prime example of the Oxygen project bringing together people from different disciplines and creating something that really wouldn’t have happened unless a hardware guy got together with a computer security person,” Devadas says.
It may be five or ten years before many of these technologies start making it into homes and offices, and it may be quite a while after that before they are integrated into the “well-oiled, humming whole” that Dertouzos envisioned when he first launched the program. Nevertheless, judging from the first prototypes coming out of Project Oxygen, it’s clear that the winds of change are beginning to stir the drapes.