In the computing industry’s decades-old arms race against hackers and pirates, the bad guys continually find and exploit holes in security software, and the good guys rush in to patch them. Now, for the first time, companies are rolling out a hardware-based security technology that promises to change the fundamental architecture of the personal computer. Whether the technology threatens users’ control over their own software and data, however, remains hotly contested.
An industry consortium that includes IBM, Intel, Hewlett-Packard, and Microsoft has created specifications for a new microchip that, independent of a computer’s main processor, would store special keys for encrypting and decrypting data. Keys stored on a separate chip are beyond the reach of hacker software, so they can keep encrypted data secure. “It’s like having a little safe inside your PC,” says Bob Meinschein, an engineering manager at Intel Research and member of the technical committee of the companies’ Trusted Computing Platform Alliance, formed in 1999.
Since last June, IBM has been selling computers that incorporate the chips, and the company expects that the chips will eventually be in smaller computing devices such as personal digital assistants and cell phones. Microsoft has gone a step further and is developing a related but independent approach dubbed Palladium. That technology incorporates both Microsoft’s own designs for special hardware and a new “nexus,” a trusted suboperating system that will run programs configured to take advantage of the hardware. It will be included in future versions of the company’s Windows operating system.
The heart of both schemes is a special microchip, a tiny Fort Knox for secret data, that includes mathematical keys to encrypt and decrypt information so that no one but the machine’s authorized user can read it. (Computers today routinely handle such encryption when they send credit card information over the Web, but most computers store keys on their hard drives, which are highly vulnerable to hackers.) And this chip doesn’t simply store secrets; it also takes over basic cryptographic operations, so software configured to take advantage of the chip’s capabilities can ask the chip to encrypt data on its computer’s hard drive. Because each chip would come with unique encryption keys, encrypted information would be accessible only to the program and the computer that originally sealed it.
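The sealing idea described above can be sketched in a few lines of code. Everything here is a toy model, not the actual Alliance or Palladium interface: the `SecurityChip` class, its `seal`/`unseal` methods, and the use of a hash-derived keystream are all illustrative assumptions standing in for key storage and cryptographic operations performed inside real hardware.

```python
import hashlib
import os

class SecurityChip:
    """Toy model of a trusted-platform security chip (hypothetical API).

    A real chip holds a unique key in hardware, beyond the reach of any
    software; here a random per-instance key stands in for that."""

    def __init__(self):
        self._root_key = os.urandom(32)  # unique per chip, never exported

    def _keystream(self, context, length):
        # Derive a keystream from the chip's root key (counter-mode style,
        # purely for the sketch -- real chips use vetted ciphers).
        stream, counter = b"", 0
        while len(stream) < length:
            block = self._root_key + context + counter.to_bytes(4, "big")
            stream += hashlib.sha256(block).digest()
            counter += 1
        return stream[:length]

    def seal(self, program_id, data):
        """Encrypt data so only this chip, asked by the same program,
        can recover it -- binding the secret to chip *and* software."""
        nonce = os.urandom(16)
        context = hashlib.sha256(program_id.encode() + nonce).digest()
        stream = self._keystream(context, len(data))
        return nonce + bytes(a ^ b for a, b in zip(data, stream))

    def unseal(self, program_id, blob):
        """Recover sealed data; a different chip or a different program
        gets only garbage, never the plaintext."""
        nonce, ciphertext = blob[:16], blob[16:]
        context = hashlib.sha256(program_id.encode() + nonce).digest()
        stream = self._keystream(context, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, stream))

chip = SecurityChip()
blob = chip.seal("tax-app", b"bank PIN: 4921")
assert chip.unseal("tax-app", blob) == b"bank PIN: 4921"
# A different program, or a different machine's chip, cannot unseal it:
assert chip.unseal("spyware", blob) != b"bank PIN: 4921"
assert SecurityChip().unseal("tax-app", blob) != b"bank PIN: 4921"
```

The last two assertions capture the article’s point: because the key never leaves the chip and the ciphertext is bound to the requesting program, sealed data is inaccessible both to other software on the same machine and to any other machine.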
To keep hackers from commandeering machines remotely, both plans include features that allow data access only to a user who is physically present. Palladium, for instance, checks whether commands are really coming from the computer’s keyboard. The systems, which also scan for changes in the hardware and software since the previous time the computer booted up, can block access to sealed data if there are signs of tampering or if unauthorized software such as a virus tries to access cryptographic functions. And both Palladium and the Alliance scheme allow other similarly equipped computers to ask the system whether its machine is in a configuration they can trust; the chip always answers truthfully. This capability could allow a computer to query, say, an online banking site to make sure its security is satisfactory before starting an exchange of sensitive information. The overall goal, says Microsoft software architect Paul England, is to have computers “unconditionally protected against software attack.”
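The “always answers truthfully” attestation step can likewise be sketched. This is a simplified model under loudly stated assumptions: the class and function names are invented, the running-hash “measurement” mimics how such chips extend a digest with each piece of boot software, and an HMAC stands in for the asymmetric signature a real chip would use (HMAC requires the verifier to share the key, which real attestation avoids; it just keeps the sketch short).

```python
import hashlib
import hmac
import os

class AttestingChip:
    """Toy model of boot measurement and attestation (hypothetical API)."""

    def __init__(self):
        self._attest_key = os.urandom(32)   # stands in for a signing key
        self._measurement = hashlib.sha256(b"")

    def measure(self, component):
        # Fold each piece of boot-time software into a running hash,
        # so any change in hardware or software changes the digest.
        self._measurement = hashlib.sha256(
            self._measurement.digest() + component)

    def quote(self, challenge):
        """Report the current configuration, signed over a fresh challenge.
        The chip signs whatever it measured -- it cannot lie."""
        digest = self._measurement.digest()
        sig = hmac.new(self._attest_key, challenge + digest,
                       hashlib.sha256).digest()
        return digest, sig

def verify_quote(attest_key, challenge, digest, sig, expected_digest):
    # A remote party (say, an online bank) checks both the signature
    # and that the reported configuration is one it trusts.
    expected_sig = hmac.new(attest_key, challenge + digest,
                            hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, sig) and digest == expected_digest

# The bank computes, independently, the digest of a configuration it trusts.
m = hashlib.sha256(b"")
for component in (b"bios", b"bootloader", b"os"):
    m = hashlib.sha256(m.digest() + component)
trusted_digest = m.digest()

chip = AttestingChip()
for component in (b"bios", b"bootloader", b"os"):
    chip.measure(component)
challenge = os.urandom(16)
digest, sig = chip.quote(challenge)
assert verify_quote(chip._attest_key, challenge, digest, sig, trusted_digest)

# Tampering (a virus loaded at boot) changes the measurement,
# so the next quote no longer matches the trusted configuration.
chip.measure(b"virus")
digest2, sig2 = chip.quote(challenge)
assert not verify_quote(chip._attest_key, challenge, digest2, sig2,
                        trusted_digest)
```

The design point is that the chip never evaluates trust itself: it only reports, verifiably, what was loaded, and the remote party decides whether that configuration is one it will talk to.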
But could the cure be worse than the disease? Proponents argue that the technology will at last keep data such as passwords, financial and medical records, and proprietary secrets safe from theft, while also preventing damage to computer networks from viruses and other software attacks. Critics, however, point out that the new chips can also be used to give software makers and content providers an unprecedented degree of control over users’ machines. Opponents raise the specter of censorship of material on personal computers. They also warn about anticompetitive practices and draconian copyright enforcement through digital rights management. Trusted computing “could be good, could be bad,” says Seth Schoen of the Electronic Frontier Foundation, a nonprofit organization in San Francisco, CA. “It allows a lot of things to happen, which in some sense couldn’t have happened before.”
Ross Anderson, a University of Cambridge computer scientist and an outspoken opponent of both plans, says trusted computing “can be useful, but it’s a lot harder to do right than you might think.” The systems could be used for remote detection and deletion of illegal software copies and even to exact payment each time a user plays a downloaded song, Anderson says. He also suggests that governments could use such features to actively censor documents and photos deemed politically sensitive or morally offensive: a state could require the chip to check online for valid licenses or lists of prohibited material and authorize it to delete data it deemed illicit.
Anderson and Schoen worry also that the new security technologies could be used to stifle competition. A maker of word processing software, for instance, could build in cryptographic keys other companies wouldn’t have, preventing competitors’ document formats from working on any machine that runs its software. That “would be a disadvantage for users,” says Schoen. “But if a software publisher has a lot of market power, they can get away with including features that are a disadvantage to users.”
Microsoft and the Alliance deny that their systems would restrict the kind of software or documents computer owners could use on their machines, and they emphasize that their only goal is to protect users’ data. They also dismiss claims by critics who say the systems could be used for invasive procedures such as remote deletion of files. “The notes that I’ve seen posted on the Web, I think, were pretty far-fetched and many of them impossible,” says Clain Anderson, director of security solutions for IBM’s PC division.
England at Microsoft, Meinschein at Intel, and Anderson at IBM all acknowledge, however, that the security chips will make digital rights management far more effective, allowing software makers and providers of online content, including music, movies, and books, to put more elaborate restrictions on the way computer owners use data. An operating system with a special security chip “could implement policy on top of it that users may like or that users may not like,” says Meinschein.
The decision to buy a PC with such a security chip and even whether to enable the chip, however, will still belong to consumers, Meinschein notes. “We’ve taken, I think, the necessary initial steps to try to ensure that these technologies can be used in reasonable ways and that users have control of their privacy and of the device. But some of this is frankly going to be evolution, and we as users and as a community, we’re going to have to work through this.” Computer owners, in other words, will have to decide whether they really trust the good guys more than they fear the bad guys.