The security war can seem like an infinite standoff: for every new defense researchers devise, invaders develop countermeasures, leading to counter-countermeasures, and so on. Fortunately, defenders don’t have to make it impossible to break into networks; they only have to make getting in so difficult, or so fraught with the risk of being tracked down, that the bad guys think twice.
Consider, for example, the most common means of breaking into a computer system: stealing passwords. Since employees often use a word or proper name as a password, would-be intruders can turn to any of several automated password-guessing programs freely available on the Web (try a search on “L0phtCrack,” for example) to run through a dictionary full of guesses. “It just takes one user with a bad password to compromise a system,” says Dorothy Denning, a computer scientist at Georgetown University.
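The core of such a dictionary attack is simple: hash each candidate word and compare it against a stolen list of password hashes. The sketch below illustrates the idea under simplified assumptions; the usernames, word list, and use of unsalted SHA-256 are illustrative only (tools like L0phtCrack actually targeted Windows LM/NT hashes and ran far larger word lists with mutations).

```python
import hashlib

# Hypothetical stolen hash list (unsalted SHA-256 purely for illustration).
stolen_hashes = {
    "alice": hashlib.sha256(b"sunshine").hexdigest(),   # a dictionary word
    "bob": hashlib.sha256(b"x9$Lq2!vR").hexdigest(),    # a strong password
}

# A tiny "dictionary"; real tools run through hundreds of thousands of
# entries plus simple variations (capitalization, appended digits, etc.).
dictionary = ["password", "dragon", "sunshine", "letmein"]

def crack(hashes, wordlist):
    """Return {user: password} for every hash matching a dictionary word."""
    cracked = {}
    for word in wordlist:
        digest = hashlib.sha256(word.encode()).hexdigest()
        for user, h in hashes.items():
            if h == digest:
                cracked[user] = word
    return cracked

print(crack(stolen_hashes, dictionary))  # → {'alice': 'sunshine'}
```

Only the user with the dictionary word falls, which is exactly Denning’s point: one bad password is enough.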
To fight back, organizations can enlist software that automatically rejects passwords based on words or names and forces users to change their passwords regularly to limit potential damage. Even safer are security “tokens,” devices ranging from keychain fobs that plug into computers to small liquid-crystal displays, which make stolen passwords less valuable. Tokens like those made by Symantec and San Jose, CA-based Secure Computing dynamically generate a new password each time a user needs to log in; a version made by RSA Security of Bedford, MA, generates a new password every minute or so in synchronization with servers. But even these precautions won’t stop highly motivated malicious agents. They can fast-talk employees out of passwords by posing as systems administrators over the phone, or simply walk through the offices, where they can often spot passwords that are written down. And acquiring a token can be as simple as stealing a purse.
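A time-synchronized token works because the token and the server independently derive the same short-lived code from a shared secret and the current time window. This is a minimal sketch of that idea, loosely modeled on the HMAC-based one-time-password scheme later standardized as TOTP; the secret, the 60-second interval, and the 6-digit format are illustrative assumptions, not RSA Security’s proprietary algorithm.

```python
import hashlib
import hmac
import struct
import time

SECRET = b"shared-token-secret"  # burned into the token, enrolled on the server
INTERVAL = 60  # a new password roughly every minute

def one_time_password(secret, t):
    """Derive a 6-digit code from the time window containing t."""
    counter = struct.pack(">Q", int(t // INTERVAL))          # window number
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "{:06d}".format(code % 1_000_000)

# Token and server compute the same code independently within a window,
# so a password sniffed off the wire expires within about a minute.
now = time.time()
assert one_time_password(SECRET, now) == one_time_password(SECRET, now)
```

Because the code changes every interval, a stolen password loses its value almost immediately; the remaining weak point, as the article notes, is stealing the token itself.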
A growing number of companies and government agencies are also turning to smart cards to limit illicit entry into their systems. Smart cards have embedded computer chips containing code that identifies the holder. Passed through a reader that can be attached to any computer, the smart card authorizes the holder to use that computer to access the network: the network will reject commands from a computer that hasn’t been presented with an authorized smart card. Smart cards can also contain the “keys” required to read or send encrypted data. Unlike encryption keys stored on a PC, keys encoded on a smart card can’t be stolen via the network. Even tighter access control can be engineered by combining smart cards with “biometric signatures” like fingerprints or voiceprints. RSA Security, Luxembourg’s Gemplus and the Datacard Group in Minnetonka, MN, are among the vendors already selling smart cards; Siemens offers smart cards tied to a fingerprint, and Domain Dynamics of Swindon, England, is prototyping cards encoded with voiceprints.
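The reason a smart card’s keys “can’t be stolen via the network” is that the card proves possession of its key without ever transmitting it, typically via a challenge-response exchange. The sketch below captures that pattern under stated simplifications: it uses a shared HMAC key where real cards use public-key signatures, and the class and method names are hypothetical.

```python
import hashlib
import hmac
import os

class SmartCard:
    """Illustrative card: the key lives only inside the chip."""
    def __init__(self, key):
        self._key = key  # never leaves the card

    def respond(self, challenge):
        # The card returns a keyed digest of the challenge, not the key.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Network:
    """Illustrative verifier holding the key enrolled for this card."""
    def __init__(self, enrolled_key):
        self._enrolled_key = enrolled_key

    def authenticate(self, card):
        challenge = os.urandom(16)  # fresh nonce defeats replayed responses
        expected = hmac.new(self._enrolled_key, challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(card.respond(challenge), expected)

key = os.urandom(32)
network = Network(key)
assert network.authenticate(SmartCard(key))           # enrolled card accepted
assert not network.authenticate(SmartCard(b"forged")) # wrong key rejected
```

An eavesdropper on the network sees only challenges and responses, never the key itself, which is why the next line of attack is physical: stealing the card and trying to crack the chip.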
Of course, smart cards can be stolen, too, and though tamper-resistant, the code on embedded chips can in theory be cracked once a card falls into the wrong hands. One way around this weakness is to build the authorization chips into the innards of the computer itself. That way, bad guys must physically get their hands on an authorized computer to crack a network, a dicey proposition that, even if successful, isn’t likely to go unnoticed for long. IBM, Intel, Hewlett-Packard, Microsoft and Compaq Computer founded the Trusted Computing Platform Alliance, now 170-plus members strong, to push for the development of such chips. The technology could be used in conjunction with smart cards and other security devices. “It puts a hardware barrier in front of a malicious software attack,” says David Safford, manager of IBM Research’s Global Security Analysis Laboratory. Safford estimates that in three to five years, every computer built will include the chips. IBM Research has also developed a tamperproof device that can be installed in servers, similar to the chips endorsed by the Trusted Computing Platform Alliance.
Eventually, though, the chip has to talk to software, and some security experts peg that as the weak point of the Trusted Computing Platform Alliance’s scheme. And once logged into a system, intruders can send commands that might coax the operating system (whether it’s Unix, Microsoft Windows or Sun Solaris) into granting them systems administrator privileges. Those privileges typically include the ability to examine server files, gain access to other servers, install “back doors” that allow easy future entry and cover their tracks by altering the system’s logs.
Operating systems can be “tightened down” to prevent this sort of manipulation, but most systems administrators aren’t familiar with the approximately 300 manual programming routines the procedure requires. Even if they are, malicious parties can exploit newly discovered holes (an average of 10 new Windows vulnerabilities, for example, circulate around the Web each month) unless systems administrators are unusually diligent about updating security features. “The machines get worse just sitting there,” notes Dan Farmer, a security consultant who has worked extensively for Sun Microsystems.
A terrorist or industrial spy doesn’t have to be proficient in the nuts and bolts of security hole exploitation to capitalize on these weaknesses. Software penetration “tool kits” that automate the process of invading and taking over a system can be downloaded from thousands of sites on the Web.