When the Web Was New
Ten years on, Technology Review’s first article about the Web is a painfully amusing read – and reminds us how much has changed.
When I wrote this article, I had just finished graduate school at MIT and was a struggling neophyte in technology journalism. I knew that there was something important about this Web stuff; a lot of MIT students were already building their own pages, some faculty were posting their course notes, and you could even order flowers online.
But virtually none of the technologies that make the Web so powerful today had yet been conceived. So, please read this with a forgiving eye. Some of the locutions I used to describe basic Web concepts are laughably baroque – but at the time, remember, the vocabulary that we all know by heart today didn’t exist.
Just one month before this article appeared, “Jerry’s Guide to the World Wide Web” had incorporated as one of the first dotcoms, under its new name: Yahoo!. Even words like “page” and “link” and “windows” were so new that my editor and I felt compelled to put quote marks around them.
How easy it is to forget that many of the technologies that structure our lives today are less than a decade old. – By Wade Roush.
Spinning a Better Web
Technology Review, April 1995
Opening a booth in the vast electronic mall known as the World Wide Web is fast becoming one of the hippest ways to reach customers and constituents, to judge by the actions of a growing cadre of businesses, government agencies, universities, and other organizations. The newest segment of the global Internet, the Web lets users wander by clicks of a computer mouse among thousands of custom-designed multimedia documents stored in linked computers. But as the system grows, it’s encountering some very old-fashioned headaches: the mall’s parking lot is full, pickpocketing is a constant hazard, and there’s no directory for orienting oneself.
Worse still, the response to these difficulties could lead to a broader problem: the development of software and data that don’t share underlying protocols. This would wall off certain portions of the Web to many users, even though the idea that all documents should be available to all users – in Web lingo, “interoperable” – is a key Web feature.
In 1990, researchers at the European Laboratory for Particle Physics (better known by its French acronym CERN) set up the Web as a way for high-energy physicists to keep abreast of one another’s progress. The idea was that one physics team might create a Web document, or “page,” of text using an article or a set of data, noting somewhere within the text the existence of, say, a corresponding graphic set up as a separate page in the system.
After starting up a system to browse for Web pages, a user could find and read the text and then retrieve the graphic by clicking on the “link” to it (the link, in the form of a word, phrase, or icon, would be highlighted). The user might wish to correspond about the information with the original team, or might develop additional Web documents – which could also take the form of color photographs, sound, and animation – that perhaps could be linked to the original text page by the same highlighting process.
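The page-and-link arrangement described above can be sketched in the HTML of the era. The filenames and wording here are invented for illustration; the `<a href>` tag is what produces the highlighted “link” a reader clicks to retrieve the separate graphic page:

```html
<!-- A hypothetical physics team's text page (results.html).
     Clicking the highlighted phrase retrieves spectrum.html,
     the graphic set up as a separate page in the system. -->
<html>
<head><title>Preliminary Collision Results</title></head>
<body>
<h1>Preliminary Collision Results</h1>
<p>The measured energy spectrum is summarized in the
<a href="spectrum.html">accompanying graphic</a>.</p>
</body>
</html>
```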
The creation of Mosaic, a program that with colorful, “windows”-style graphics makes browsing easy and enjoyable, has fueled an explosion in Web use and development far beyond that envisioned by the original scientists. The public is starting to use the system to find documents posted by businesses and other organizations describing, say, how to order flower bouquets electronically or apply for admission to a particular university.
Realizing that the Web can be valuable in helping to make sales, many companies are creating online catalogs and advertising to entice thousands of computer-literate (and upscale) customers each day while avoiding the high costs of traditional marketing through print and broadcast media.
With the boom in use, the number of Web servers – the computers that handle requests for Web documents – has grown from only 130 in mid-1993 to well over 10,000 today.
But the rapid pace of development is leading to traffic-control and other problems. For example, while requests for materials are usually answered within seconds, popular Web documents – such as the White House home page, which includes photos of the First Family and a recorded message from President Clinton – sometimes take minutes to transmit or fail altogether.
In part, the slowness relates to the number of requests individual Web servers can handle at once. Also, the multimedia nature of many Web documents requires enormous amounts of data, making gridlock a bigger problem for Web users than for users of other parts of the fast-growing Internet.
There’s also mounting concern that the system needs programs to secure sensitive data, such as customers’ credit-card numbers, against interception and decryption by computer intruders. While malicious hacking has not yet become a problem on the Web, developers are acutely aware that no other part of the Internet has been spared the wrath of criminally motivated hackers.
Users also see the value of developing programs that essentially offer a “Yellow Pages” for the flood of information available on the Web, since searching it now for particular documents can be painful or even impossible. But the creation of various programs with different underlying protocols could result in the Web’s fragmentation and decay, according to Tim Berners-Lee, who created the Web’s original standards while a CERN member.
To address these difficulties, a group of software, electronics, and communications companies founded the World Wide Web Consortium (W3C) this past October. Headquartered at MIT’s Laboratory for Computer Science (LCS), the consortium, which now has about two dozen members including IBM, Digital Equipment, Hewlett-Packard, MCI, Lotus, and Microsoft, is using membership dues to fund work on a long list of technical protocols useful both for designers of Web documents and for software companies hoping to create new ways for users to retrieve and manipulate these documents.
“People were arriving unannounced in my office at CERN demanding that we form the consortium,” recalls Berners-Lee, who now directs W3C. “Companies investing larger and larger amounts of their own resources into the Web, or into work that relies on the Web, wanted to know that it would still be there, still interoperable, in 20 years.”
Berners-Lee says he chose to base the consortium at MIT because a large number of Web-related research projects are already underway there, and because the effort is similar to the X Consortium, an MIT project in which researchers worked with industry to develop and release at no cost X Windows, a widely adopted point-and-click user interface for workstation computers.
Similarly, starting in 1996 the W3C group plans to release its standards free of charge to Web document developers and software companies writing Web-related programs. Small task forces of specialists are beginning to divide up W3C’s work. Staff from the consortium’s corporate members plan to visit MIT’s computer-science lab frequently to monitor progress and contribute their own expertise.
Setting up Traffic Rules
One W3C group hopes to alleviate traffic problems. The task force intends to develop protocols for storing frequently requested information at multiple locations and ensuring that data follow the shortest possible path to their destinations.
Albert Vezza, associate LCS director and one of W3C’s organizers, points out, “Right now, you can’t even tell that a request has come from a particular geographic area. It may go all the way around the world to get answered.”
He adds, “There have to be enough smarts in the protocol to know how to get it to the closest computer that can answer it.”
A group of prospective members of the consortium’s task force on security, privacy, and authentication has already met to discuss protocols for securing commerce over the Web, such as online catalog shopping and orders for journal subscriptions.
The group also wants to develop standards related to software techniques for authenticating the identities of both buyer and seller. To prevent credit-card fraud and other forms of theft, the server should give verifying information about individuals, explains Berners-Lee. Information on possible underlying protocols isn’t available yet.
Another team will be charged with creating protocols for software to help people search for specific topics in Web pages. Standards are needed for programs – several are under development – that, according to Suzana Lisanti, campus-wide information systems facilitator at MIT, generate and update daily a customized index of the subjects each Web page addresses. The index would function as a kind of electronic headline and could contain key words designed to help a person decide whether to read particular pages.
As W3C’s technical work gets under way, Berners-Lee says one of his most important tasks will be to balance the competing visions of its corporate members, each of whom has a financial stake – as a software developer or information provider – in the shape of the eventual Web standards.
For example, Mosaic Communications, a California firm that sells an enhanced version of the original Mosaic program, stands to gain a competitive advantage over other consortium members if the new protocols incorporate some of the company’s software innovations, while other members will be just as eager to see their ideas used.
But Berners-Lee says his experience at CERN in editing the original Web specifications has taught him to be optimistic that such conflicts can be overcome simply through directed discussion. The members will resolve their differences because, he says, everyone stands to gain from the system’s “overriding, essential nature” – that every Web document is available to every user.