Technology's Frontier Syndrome
We are constantly being promised deliverance to greener pastures, where today’s problems will plague us no more. But new perils seem always to lie in wait.
Remember when Microsoft promised that Windows NT would solve our computer security problems? This was back in the early 1990s, when most of the PC world was using Windows 3.1. Computer viruses were rampant. And as near as anybody could tell, things were only going to get worse.
But the word on the street back then was that we shouldn’t worry: Microsoft was developing a new operating system that would make everything better. Unlike previous Microsoft systems, NT would employ the most sophisticated security measures available to stop viruses dead in their tracks.
Alas, things didn’t work out that way. It’s true that Windows NT had advanced features like memory protection and separation of privileges, features designed to prevent one program from modifying other programs or the operating system itself. But those mechanisms only protected the operating system against unprivileged users, not against system administrators. And because of the way that Windows NT was built, many programs required that users log in as an “administrator” in order to get any work done at all. So in the end, the Windows NT security mechanisms, even though they alleviated some problems, didn’t address the operating system’s underlying susceptibility to viruses.
Indeed, by the mid-1990s, the virus problem had gone from bad to worse. The problem was no longer the operating system; it was application programs. In 1995 the very first Word macro virus appeared. “Concept,” as it was called, was a new kind of virus. Instead of infecting the operating system or programs, Concept infected Microsoft Word documents. This was a threat that Windows NT’s security model was utterly unprepared to handle. Concept spread like wildfire. We hadn’t solved the virus problem; we had just moved it somewhere else.
This experience with Windows NT is an example of a phenomenon that comes up again and again in the computer industry. I call it the “Frontier Syndrome.” Researchers, engineers, or whole companies get excited about a new technology: a computational or architectural “frontier” where no one has gone before. These visionaries make up lots of stories about how the frontier is a better, cleaner, simpler place, with none of the problems that we face today. And then they set off, usually with millions of dollars in capital, to turn their vision into a reality.
These cycles become pretty easy to recognize once you’ve lived through a few of them. One reason is that they almost always end up the same way. The frontier is exciting as long as it’s mostly filled with pioneers, people who are willing to live with the rustic leading edge of technology. But as soon as new people move into town, when the roads get paved and the housing projects get put up, we discover that the problems on the now-conquered frontier aren’t all that different from the problems that we thought we had left behind.
The Java gold rush was an example of the Frontier Syndrome in the extreme. Java, a fundamentally new computer language, burst onto the scene in 1995. Billed as a language designed for the Internet, Java was going to simultaneously wipe away the security problems of C++ and make the Macintosh a viable platform in Apple’s competition against Microsoft. What’s more, Java showed up just in time to help solve the Y2K crisis: instead of trying to upgrade millions of lines of COBOL code, businesses could rewrite their systems using sweet and simple “Enterprise JavaBeans.”
Java certainly worked its magic, offering employment to a whole new crop of programmers and buoying Sun Microsystems’ stock price to new heights, but what do we have now? Most of the projects aimed at rewriting desktop applications into Java failed. Java wasn’t the savior for the Mac after all: as most Macintosh users know, many Java programs that run well under Windows don’t work on the Mac. In fact, they crash so often that many Web sites no longer send Java programs to people who aren’t running Microsoft Windows. And as far as those legacy COBOL programs go, an increasing number of banks and financial institutions are now faced with millions of lines of legacy Java programs!
Two contemporary examples of this syndrome are Microsoft’s Palladium initiative, now known as the “Next-Generation Secure Computing Base for Windows,” and the Trusted Computing Group. Both projects aim to make PCs more secure by making them resistant to viruses and other kinds of hostile code. We’ve heard this before, of course. But this time, instead of using mere memory protection, these new initiatives will use the power of cryptography to keep out bad code while allowing good code to flourish. Really!
Proponents and detractors of these new technologies both argue that they will give computers new capabilities, including the ability to restrict computers so that only programs certified with a digital signature can execute. That’s good if you are running a corporate network and don’t want hackers installing programs that silently “sniff” keystrokes and passwords, or computer viruses that send out copies of spam without the knowledge of the computer’s owner. But it’s bad if you are a home user and want to install software that hasn’t been pre-approved by the likes of Microsoft or Intel. Both of these scenarios, however, assume that the underlying technology will actually work. That is, they assume that the new, pure, cryptographically enabled frontier will somehow be different from this dirty, messy world in which we all live. I’m not so sure that this will be the case: just as Word brought macro viruses to Windows NT, it’s quite possible that some new program running on tomorrow’s computers will enable a new generation of hostile code that hasn’t been anticipated today. Just because we fix today’s problems doesn’t mean that tomorrow’s computers will be more secure. In fact, they’ll probably be less secure, just in ways that we can’t imagine right now.
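The signed-code gate these initiatives describe can be sketched in a few lines. The sketch below is a hypothetical simplification: a SHA-256 allowlist stands in for real public-key signatures, and the names (`certify`, `is_certified`) are invented for illustration. What matters is what the gate checks, and what it never sees.

```python
import hashlib

# Hypothetical sketch of "only certified code executes."
# Real systems verify public-key signatures; an allowlist of
# SHA-256 digests is a simplification that models the same gate.
APPROVED_DIGESTS: set[str] = set()

def certify(program: bytes) -> None:
    """Pretend a vendor signs the program by recording its digest."""
    APPROVED_DIGESTS.add(hashlib.sha256(program).hexdigest())

def is_certified(program: bytes) -> bool:
    """The gate: allow a program to run only if its digest is on the list."""
    return hashlib.sha256(program).hexdigest() in APPROVED_DIGESTS

word_processor = b"... imaginary certified application code ..."
certify(word_processor)

tampered = word_processor + b"appended virus payload"

print(is_certified(word_processor))  # True: digest matches
print(is_certified(tampered))        # False: any change breaks the digest
```

The gate stops a tampered executable cold. But a hostile macro inside a document never passes through the gate at all; the certified word processor does the interpreting. That is exactly the loophole Concept exploited, and nothing in this model closes it.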
Indeed, the real reason that new computing platforms are usually more secure than old ones is that nobody has written attack programs for them yet. (Incidentally, that’s also the reason that there are no viruses for Mac OS X yet: as a new and relatively insignificant platform, it hasn’t attracted the interest of virus writers.) The frontier mentality is fueled by the deceptive phenomenon that moving to a new patch of ground really does get one away from the old problems, at least for a while. What then happens, time after time, is that new versions of the old problems crop up in this new, supposedly superior territory.
The Frontier Syndrome is also what’s behind a series of calls to create “Gated Internet Communities,” or GICs, to keep out spammers. Proponents of these gated communities say that they will be able to create new Internets that are spammer-free. Apparently they have forgotten that the Internet itself was once spam-free, back when it was on the frontier. People adopted e-mail because it didn’t have telemarketers who called during dinner, or junk mail and catalogs that filled up the mailbox by the curb. Spam appeared on the Internet as the number of users shot up and the riff-raff came to town. The same thing will happen to these GICs if they achieve any sort of critical mass.
The computer industry thrives on the Frontier Syndrome. By promising that the next generation of machines will be fundamentally different from the current one, this mindset lets routine hardware upgrades masquerade as capital expenditures. IT managers can thus amortize their equipment purchases over several years, in effect giving them a bigger slice of the corporate pie. And journalists buy into the Frontier Syndrome as well: after all, it’s always more fun to talk about revolutionary new technology than to observe that we’re being fed more of the same old same old.