
To keep hackers from commandeering machines remotely, both plans include features that allow data access only to a user who is physically present. Palladium, for instance, checks whether commands are really coming from the computer’s keyboard. The systems, which also scan for changes in the hardware and software since the previous time the computer booted up, can block access to sealed data if there are signs of tampering or if unauthorized software such as a virus tries to access cryptographic functions. And both Palladium and the Alliance scheme allow other similarly equipped computers to ask whether the machine is in a configuration they can trust; the chip always answers truthfully. This capability could allow a computer to query, say, an online banking site to make sure its security is satisfactory before starting an exchange of sensitive information. The overall goal, says Microsoft software architect Paul England, is to have computers “unconditionally protected against software attack.”
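As a rough illustration of these ideas (and not of either company’s actual design), the sketch below models a security chip that folds a hash of each piece of boot software into a running “measurement,” releases sealed data only while that measurement is unchanged, and answers attestation queries truthfully. Every name and interface here (TrustedChip, extend, seal, unseal, quote) is hypothetical.

```python
# Toy model of measured boot, sealed storage, and attestation as described above.
# This is an illustrative sketch only, not Palladium's or the Alliance's design.
import hashlib
import hmac
import os


class TrustedChip:
    def __init__(self):
        self.measurement = b"\x00" * 32      # register recording the boot state
        self.device_key = os.urandom(32)     # secret known only to the chip

    def extend(self, component: bytes) -> None:
        """Fold the hash of each booted component into the running measurement."""
        self.measurement = hashlib.sha256(
            self.measurement + hashlib.sha256(component).digest()
        ).digest()

    def _sealing_key(self, measurement: bytes) -> bytes:
        # A key that is only reproducible while the platform is in this exact configuration.
        return hmac.new(self.device_key, measurement, hashlib.sha256).digest()

    def seal(self, data: bytes) -> tuple[bytes, bytes, bytes]:
        """Bind data to the current configuration; return (data, measurement, tag)."""
        tag = hmac.new(self._sealing_key(self.measurement), data, hashlib.sha256).digest()
        return data, self.measurement, tag

    def unseal(self, data: bytes, sealed_to: bytes, tag: bytes) -> bytes:
        """Release data only if the platform still matches the configuration it was sealed to."""
        if self.measurement != sealed_to:
            raise PermissionError("configuration changed since sealing; access blocked")
        expected = hmac.new(self._sealing_key(self.measurement), data, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise PermissionError("sealed data has been tampered with")
        return data

    def quote(self, nonce: bytes) -> tuple[bytes, bytes]:
        """Attestation: truthfully report the measurement, authenticated by the chip's key."""
        sig = hmac.new(self.device_key, nonce + self.measurement, hashlib.sha256).digest()
        return self.measurement, sig


# A clean boot seals a password; loading unauthorized software afterward changes the
# measurement, so the same chip then refuses to unseal it.
chip = TrustedChip()
for component in (b"firmware v1", b"os loader v1", b"os kernel v1"):
    chip.extend(component)
blob = chip.seal(b"online-banking password")

chip.extend(b"unauthorized driver")   # e.g. a virus loaded after boot
try:
    chip.unseal(*blob)
except PermissionError as err:
    print("unseal refused:", err)
```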

But could the cure be worse than the disease? Proponents argue that the technology will at last keep data such as passwords, financial and medical records, and proprietary secrets safe from theft, while also preventing damage to computer networks from viruses and other software attacks. Critics, however, point out that the new chips can also be used to give software makers and content providers an unprecedented degree of control over users’ machines. Opponents raise the specter of censorship of material on personal computers. They also warn about anticompetitive practices and draconian copyright enforcement through digital rights management. Trusted computing “could be good, could be bad,” says Seth Schoen of the Electronic Frontier Foundation, a nonprofit organization in San Francisco, CA. “It allows a lot of things to happen, which in some sense couldn’t have happened before.”

Ross Anderson, a University of Cambridge computer scientist and an outspoken opponent of both plans, says trusted computing “can be useful, but it’s a lot harder to do right than you might think.” The systems could be used for remote detection and deletion of illegal software copies and even to exact payment each time a user plays a downloaded song, Anderson says. He also suggests that governments could use such features to actively censor documents and photos deemed politically sensitive or morally offensive: a state could require the chip to check online for valid licenses or lists of prohibited material and authorize it to delete data it deemed illicit.

Anderson and Schoen also worry that the new security technologies could be used to stifle competition. A maker of word processing software, for instance, could build in cryptographic keys other companies wouldn’t have, preventing competitors’ document formats from working on any machine that runs its software. That “would be a disadvantage for users,” says Schoen. “But if a software publisher has a lot of market power, they can get away with including features that are a disadvantage to users.”
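The lock-in concern can be pictured with a second small sketch, again purely hypothetical and not drawn from any real product: if the key protecting a document is derived from the identity of the application that created it, only that vendor’s application can ever recover the contents.

```python
# Toy illustration of application lock-in via identity-bound keys (hypothetical).
import hashlib
import hmac

DEVICE_KEY = b"secret known only to this machine's security chip"


def app_bound_key(app_binary: bytes) -> bytes:
    """Derive a storage key from the hash of the application requesting access."""
    app_identity = hashlib.sha256(app_binary).digest()
    return hmac.new(DEVICE_KEY, app_identity, hashlib.sha256).digest()


vendor_a = b"WordProcessor-A build 1.0"
vendor_b = b"WordProcessor-B build 7.2"

# Vendor A's application protects a document under a key tied to its own identity.
document_tag = hmac.new(app_bound_key(vendor_a), b"quarterly report", hashlib.sha256).digest()

# Vendor B's application derives a different key, so the document never verifies for it.
print(hmac.compare_digest(
    document_tag,
    hmac.new(app_bound_key(vendor_b), b"quarterly report", hashlib.sha256).digest(),
))  # -> False
```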
