
The Web’s Unelected Government

A little-known group that holds closed meetings is the closest thing the Web has to a central authority. Technology Review offers the first in-depth look at this crucial player in the Web’s future.

When you’re part of the group that runs the World Wide Web, it can be daunting to explain to your mother what you’ve done with your day. Take Paul Trevithick, chief technology officer for Bitstream, a Cambridge-based company that designs and sells computerized type fonts. Trevithick was in his hometown of San Jose for a meeting of the World Wide Web Consortium’s font group not too long ago, and after one day’s sessions he decided to visit his mother. “What did you do today, dear?” she asked.

“Well,” Trevithick responded, “we tried to define the future of how information will be published in any medium 10 years from now.”

That’s certainly a grand goal, and it’s business as usual for the World Wide Web Consortium. The consortium, known to insiders as W3C, has, at last count, 275 member organizations, including companies, nonprofit organizations, industry groups, and government agencies from all over the world. This power-packed assemblage is the closest thing the famously decentralized Web has to a governing body. More than the U.S. government, whose funding created the Internet, and more than the telephone companies whose wires and fibers carry the Net’s digital traffic, it is the W3C that will largely determine the Web’s structure in the 21st century.

For a group with this much clout, W3C isn’t well known. Nor does it court fame. Its meetings are closed to outsiders. Although the consortium, based at MIT, has brought some order to the unruly thickets of the Web, it has plenty of critics who say the group has become a significant maker of public policy, and ought to start acting like one. They argue that the W3C should open its membership and meetings to broader, more democratic participation.

In addition, they say, the organization’s decisions and structure essentially reflect the personality of one man: Tim Berners-Lee, who invented the World Wide Web in the early 1990s while working at CERN, the European high-energy physics lab. Almost everyone involved with the Web has tremendous respect for Berners-Lee, now at the MIT Laboratory for Computer Science (LCS). But the consortium’s critics say that a body that has this much effect on a technology that affects us all can’t be the province of any one person. To test these claims and take our readers behind the scenes, Technology Review talked to the major players in the consortium and many of the group’s most thoughtful critics.

Real-life Sci-Fi

If Tim Berners-Lee has a huge influence on the operation of the W3C, it’s an influence earned by his role in the creation of the Web itself. Essentially, what Berners-Lee invented was a scheme for linking to a document of any kind stored on any computer connected to the Internet. Dubbed the Universal Resource Locator, or URL, this innovation gave everything on the Internet its own unique address. Type a URL into a special program called a Web browser, and the program would go out to the Internet, fetch the information, and display it on your computer screen. Berners-Lee’s invention of the URL made possible the sci-fi vision of having all the world’s information available at the click of a button.
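To make that concrete (the address below is a placeholder invented for illustration, not one mentioned in this article): a URL such as http://www.example.org/papers/web-proposal.html packs three pieces of information into one line of text: the protocol the browser should use to fetch the document (http), the name of the computer on the Internet that holds it (www.example.org), and the document’s location on that machine (/papers/web-proposal.html).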

But CERN’s management saw the Web as a costly distraction, and its inventor soon found himself looking for a new job. Berners-Lee hit the research lab circuit. He spent a month at MIT’s LCS, then another at Xerox’s Palo Alto Research Center. He talked with a few people about starting a company called WebSoft to commercialize the technology, but quickly jettisoned that idea. “If I had tried to capitalize on the Web initially, it probably would not have taken off,” Berners-Lee says. “The Internet community would have dropped the Web like hotcakes.”

In 1992, Berners-Lee, still without a permanent professional home, was attending a conference in England, when on a bus he happened to sit next to MIT professor David Gifford of LCS. Gifford suggested that he e-mail Michael Dertouzos, director of LCS, who for more than a decade had been describing his vision of an “information marketplace”-in which computers conduct business electronically and act as research assistants for their human masters. It was a vision that was remarkably similar to Berners-Lee’s idea of the Web as a worldwide library. The next month, Dertouzos flew to Geneva and met with Berners-Lee. “He was into the gigantic brain thing,” recalls Dertouzos, “building a network to store all the world’s knowledge. He did not quite see the commercial aspects at the time. But one thing was obvious: the intersection of our views.”

Another thing was obvious as well: MIT, which already had experience in the consortium business, could become Berners-Lee’s new base of operations. For nearly a decade MIT had been the home of an academic/industry partnership called the X Consortium. The X Consortium had taken the X Window System, created at MIT’s Project Athena in the 1980s, and shepherded the program’s development as a core technology for Unix workstations. MIT held the copyright for the system and made the technology available to all, free of charge. Dertouzos suggested to Berners-Lee that MIT could do the same thing with CERN’s Web technology.

Berners-Lee convinced Dertouzos that the Web was too big for the consortium to be solely an MIT project. Instead, MIT would jointly host the World Wide Web Consortium with CERN. The consortium would be funded the way the X Consortium had been: Companies would pay a membership fee to get early access to the consortium’s technology and have the right to help direct its development. The consortium set up shop in October 1994. Two months later, CERN backed out and handed the Web’s European mantle to INRIA (the French National Institute for Research in Computer Science and Control). Jean-François Abramatic, W3C’s chairman, who is based at INRIA, explains the switch: “CERN is a Nobel Prize maker, and there is no Nobel Prize in computer science.”

Workgroups for Windows

The World Wide Web Consortium that grew out of these discussions is not a standards organization in the mold of such traditional outfits as the American National Standards Institute (ANSI) or the International Organization for Standardization (ISO). Think, instead, of W3C as a group of technologists who give advice to director Berners-Lee, who consults with the consortium’s members and then issues his recommendations. Legally speaking, Berners-Lee’s recommendations have no teeth; even consortium members are under no obligation to implement them. In practice, however, W3C’s recommendations carry a moral authority that is the closest thing the Internet has to law. Microsoft, Netscape and a host of other companies have pledged to implement the standards in their products. That moral authority attaches both to the W3C’s technical work, which is almost universally praised, and to its policy-making activities, which have generated considerable controversy.

Both the technical and policy activities take place within the same structure: a set of working groups, each set up to address a specific issue. The need for a working group is identified by Berners-Lee, a staff member, or an outside company. Berners-Lee approves the creation of the group and appoints a chair, who invites other members. The working group’s discussion takes place via electronic mail, weekly teleconferences and occasional face-to-face meetings. At some point, a person from the group volunteers to edit the final written specification. The document is distributed to the W3C membership, which votes on the recommendation. Following this final vote, the W3C’s director can accept the measure and make it an official W3C technical recommendation, or decline to adopt it.

To get a sense of the technical aspect of what these groups grapple with, consider the disarray over HyperText Markup Language (HTML), the set of codes that determine how Web pages look and behave. If you look at the raw text of a Web page, you’ll see plain text and many words in angle brackets, like <i>. These words in angle brackets are called “tags.” They tell a Web browser how to typeset and display information that it finds. (The <i> tag, for example, tells the browser to display text in italics.) For the Web to work properly, all Web browsers need to implement more or less the same set of tags. Unfortunately, the original Web browsers out of CERN and the National Center for Supercomputing Applications in Illinois omitted many useful features, such as the ability to display information in tables.
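For illustration (this fragment is invented for this article, not taken from any real page), the raw text behind a small piece of a Web page might read:

    <p>The committee met <i>twice</i> this year.</p>
    <table>
    <tr><td>Meetings held</td><td>2</td></tr>
    <tr><td>Attendees</td><td>14</td></tr>
    </table>

A browser that understands every one of these tags shows an italicized word followed by a neat two-column table; an early browser without table support could do no better than run the table’s contents together as plain text.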

When Netscape released its first Web browser, the company added a few new HTML tags. When Microsoft followed, introducing its Internet Explorer browser with Windows 95, it adopted some of Netscape’s tags, rejected others, and introduced some new ones of its own. Both organizations said that they would work with the consortium to have their proprietary tags accepted into the HTML standard. But until such a standard was adopted, companies trying to publish information on the Web were in a quagmire. If they took advantage of the new advanced features in Netscape Navigator or Internet Explorer, their Web pages wouldn’t look right to somebody using the other browser.

To make matters worse, Netscape and Microsoft promoted themselves by celebrating their incompatibilities. Netscape peppered the Web with logos saying “Best Viewed with Netscape Navigator,” which meant the Web site used some feature that Microsoft didn’t support. Microsoft, meanwhile, allowed Web sites to display a “Best Viewed with Internet Explorer” logo if the Web site used a feature that Netscape’s browser lacked.

Into this mess stepped the W3C’s HTML working group. The group, with representatives from Netscape and Microsoft, among other companies, quickly agreed on a modest goal: codify into a single standard the HTML tags that were in use on the Internet at the time (May 1995). “We didn’t do a lot of design work,” recalls Dan Connolly, the W3C staff member who chaired the committee. “We just said what’s in, what’s out, let’s write it up.” The specification was written not so much for the people creating Web browsers as for companies authoring Web pages, so they could know which HTML tags to use and which to avoid.

The HTML working group moved fast; its final specification was adopted by the W3C in May 1997. And it did “enduring” work, at least by the standards of the Web: What the group settled on is pretty much what is used on the Web today. Meanwhile, the W3C recently finished the standardization of HTML 4.0, an excellent standard that will gain in use as more and more Web users upgrade to Netscape Navigator 4.0 and Internet Explorer 4.0.

On the whole, W3C’s members are pleased with the technical work the consortium has undertaken. They are particularly encouraged that W3C has managed to avoid some of the pitfalls that beset other standards organizations: projects that drag on because of infighting or because they are overly ambitious. “The W3C defines projects and sets goals that are relatively short-term,” says Don Wright, who sits on the W3C’s Advisory Committee on behalf of Lexmark, the printer manufacturer. “Something gets delivered.”

Porn and Privacy

While its technical work is widely admired, the W3C has raised eyebrows with projects that have more to do with regulating online society than managing the flow of bits. And no W3C initiative has caused more controversy than the consortium’s efforts in its “Technology and Society” domain. The consortium’s two most significant projects in this arena are PICS, a Web-based system designed to let parents and schools control the kind of information children can view on the Internet, and P3P, a system for controlling privacy and the spread of personal information. What has steamed some critics is the notion that, by creating these protocols, the consortium is actually setting social policy for cyberspace-and, in the process, usurping the role of democratically elected governments.

PICS (short for Platform for Internet Content Selection) was conceived in August 1995 in response to what many at the W3C regarded as an impending political catastrophe. The U.S. Congress was about to enact a law that would have criminalized transmission of “indecent” material to minors over the Internet. PICS offered an alternative approach in which Web sites would rate themselves, saying whether they contained nudity, sexuality, violence, and if so, how much. Parents could then selectively block access to those Web sites using screening software they would install on their own computers.
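To picture how that works (the rating vocabulary below is invented for illustration; a real PICS label uses whatever categories its rating service defines), a site’s self-rating might amount to a machine-readable note saying:

    rating service: “family-viewing ratings, version 1”
    nudity: 0    violence: 2    crude language: 1

Screening software on a home or school computer compares those numbers against the thresholds a parent or teacher has chosen and quietly refuses to fetch any page that exceeds them.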

The PICS working group operated in secret at breakneck speed, producing a specification and working code within three months, says Jim Miller, who led the effort while at the W3C and now works for Microsoft. PICS didn’t prevent Congress from passing the Communications Decency Act. But the federal court that held the act unconstitutional in 1996 cited PICS as evidence that the Web could police itself without external censorship.

Proponents of PICS say the system is policy-neutral: The W3C didn’t create a specific rating system or dictate what could or could not be seen by children. But others disagree, noting that PICS is ideally suited to letting a person’s Internet service provider filter the material its subscribers can see. This capability, inherent in the design of PICS, makes it easy to use the technology for censorship, the very thing it was purportedly designed to prevent. “I don’t like the ease with which upstream filtering can go on invisibly” with PICS, says Lawrence Lessig, professor of constitutional law at Harvard Law School and an expert on cyberspace issues. With PICS, he says, “the code writers become important policy-makers.”

Lessig believes a more open process might have created a technology less liable to subvert basic freedoms. “Given that [the consortium] is a pretty powerful organization, it should be more open. If they want to do policy, they have to accept the constraints on a policy-making body, such as openness in who can participate.” But constraints like those are antithetical to W3C’s charter as an industry body, responsible first to the needs of its members-who pay its bills.

The W3C acknowledges this criticism, and says it is making an effort to do better. “The W3C has done a progressively better job of engaging outside constituencies and experts,” says Danny J. Weitzner, who was deputy director of the Center for Democracy and Technology (CDT) in Washington, D.C., until this fall, when he became leader of the W3C’s Technology and Society Domain. For example, says Weitzner, the CDT was involved in the “PICS process,” even though CDT was not a member organization of W3C at the time. (CDT has since joined the consortium.) From now on, Weitzner says he plans “to do everything I possibly can to engage people who are interested in these technology-and-society issues.”

One of the biggest issues on Weitzner’s agenda will be privacy, a concern whose commercial impact motivates W3C’s corporate members. According to some surveys, as many as 80 percent of Internet users who refuse to make purchases online base their decision in part on fear that their privacy might be violated via the information they surrender in making the transaction. Net users’ privacy angst is not fantasy; a trail of personal information gets left online by practically every Web user. Click into one Web site and you might be completely anonymous; a different site might secretly record your name, e-mail address, and everything you look at. The problem is compounded by political pressure: the Clinton administration has issued a warning to the Web community to adopt a model for self-regulation or be prepared for government intervention.

As the issue of privacy surfaced, the W3C working group dealing with PICS came to an intriguing realization. Rather than rating Web sites on the basis of their sexual content, PICS could rate a site’s privacy practices. Then, if you wanted to avoid sites that didn’t honor your privacy, your computer could automatically keep you out, just as a child’s computer, configured by a parent, keeps the child away from pornography.

The idea of privacy software gained momentum during the summer of 1996, when the Federal Trade Commission held the first in a series of hearings about online privacy. That winter, the CDT hosted a meeting of an ad hoc group it called the Internet Privacy Working Group. Invited guests included privacy activists as well as representatives from IBM, America Online and even the Direct Marketing Association. “They had a pretty diverse group,” says AT&T research scientist Lorrie Faith Cranor, who participated in several of the W3C working groups.

In the spring of 1997 this ad hoc group realized it “didn’t have enough expertise on the technical side,” says Cranor, so it asked W3C to take on the project. The W3C membership approved the idea in a nonbinding vote and Berners-Lee authorized the project. Roughly a year later, the group had created a draft recommendation called P3P, and companies such as Microsoft and Netscape were making formal commitments to implement the technology.

P3P, which stands for the Platform for Privacy Preferences Project, won’t by itself protect anybody’s privacy. That’s because the technology isn’t really designed to prevent Web sites from gathering information about a Web user, but rather to convey personal information explicitly from the Web user to the Web site-as long as the Web site promises to abide by certain privacy policies.

Here’s how P3P works. Each participating Web site publishes its privacy policy in machine-readable form. One Web site, for instance, might disclose that it records every page you look at, but uses the information only for research purposes. Another site might request your age and zip code so that it can present you with customized news reports. A third site may want to know your name, address and phone number, and sell this information to companies whose advertising subsidizes the site.

When your browser connects to a Web site, it looks at the privacy “proposal” the site provides, indicating which kind of personal information the site requests and what it intends to do with it. Your browser then looks at your preset privacy preferences. If there is a match (if you don’t mind your e-mail address being used for research purposes, for example), your browser can automatically provide the requested information. If in your view the site’s proposal constitutes a violation of privacy, however, the page won’t load and you’ll see a message on the screen explaining the mismatch.
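A hypothetical exchange makes the matchmaking clearer (the wording below is invented for illustration and is not the actual P3P vocabulary). A news site’s machine-readable proposal might boil down to:

    requests: e-mail address, zip code
    purpose: customized news, aggregate research
    sharing: not disclosed to other companies

Suppose your stored preference says, in effect, “give out my zip code freely, but release my e-mail address only to sites that won’t share it.” The proposal satisfies both rules, so the browser hands over the two values without interrupting you. Had the site instead declared that it sells addresses to advertisers, the preference would fail and the page would be blocked with an explanation of the mismatch.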

So what happens, you ask, if a Web site lies about its privacy policy? Nothing. P3P lacks both auditing and enforcement measures. Its authors hope misrepresentations in privacy policies will be handled the same way fraudulent consumer advertising is dealt with: lawsuits and government enforcement. The system also has provisions for something like a “Better Business Bureau” seal of approval; an organization’s privacy policy can be digitally signed with another organization’s private key, and that signature can be verified by consumers.

W3C director Berners-Lee acknowledges that this reliance on trust is a weakness in P3P: “I am concerned that we can make a beautiful protocol until we are blue in the face, but if it isn’t backed by legislation, there will be sites that simply don’t talk P3P. These sites may ask you for your mailing address and then may be abusing your privacy.”

Privacy advocates are split on the value of P3P. Some believe that while the technology isn’t perfect, it’s better than nothing. P3P can be used to create greater privacy than exists on the Web right now, says Ann Cavoukian, the privacy commissioner for Ontario, who also participated in the P3P working group. “I support P3P and other technologies that will come along and empower the individual,” she adds.

Others, however, have sharply criticized P3P as being less a means to protect privacy than a way for businesses to gather personal information from Web users. Marc Rotenberg, director of the Washington-based Electronic Privacy Information Center, says P3P in effect lets users sign away privacy rights that are supposed to be unwaivable. Both U.S. and European privacy laws outlaw some kinds of privacy-violating transactions even if they are entered into voluntarily. For example, in the United States it is illegal for a video rental store to reveal the names of the movies that its customers rent. The video store may not say to its customers, “We will protect your privacy and charge you $5 per day, or you can give up your privacy and pay just $4 per day.” But that sort of deal could be both proposed and accepted using P3P.

“P3P reflects the Clinton administration’s enthusiasm for what are essentially ‘notice-and-consent’ techniques to resolve privacy issues,” says Rotenberg. Unfortunately, he says, this approach all too often becomes a take-it-or-leave-it dilemma for the consumer: accept that the business is going to violate your privacy, or go play somewhere else. “The emphasis in P3P is on negotiating the terms of privacy between a data subject and the data collector, but that really runs contrary to what privacy law and policy has always been about,” says Rotenberg. “P3P says that anything goes.”

If P3P is adopted, one critical question remains: What will be the default settings provided to users? Few computer users ever learn to change the preference settings on their software. Therefore, the way a Web browser equipped with P3P sets itself up by default is the way the majority of the Internet population will use it. “That’s where the public debate ought to be,” says Miller. “The marketing industry would want the defaults on the client to be set so that everything is preapproved; privacy advocates are going to say that the appropriate setting is that nothing is preapproved. My take is the W3C should not be involved in making that decision. That is a public policy debate.”

There might not be much of a debate, however. That’s because companies like Microsoft and Netscape, which both create Web browsers and run massive Web sites, are likely to establish their own settings-regardless of what the W3C recommends. This spring, for instance, Microsoft bought a company called Firefly, which had contributed heavily to the P3P standard. Since then, Firefly has become Microsoft’s “Privacy Czar,” says Thomas Reardon, Microsoft’s program manager for Internet architecture. Firefly is “the core of our entire [privacy] strategy,” Reardon says, guiding the software giant’s decisions about the commercial value of personal information collected from customers as well as “what is the right thing morally.” W3C’s influence is strong, but it only goes so far.

Is Top-Down a Downer?

Critics see the top-down, personalized structure of the consortium as a problem in policy-making, and one that may ultimately undermine the consortium itself by causing members to lose interest.

At the June 1996 meeting of W3C’s Advisory Committee, “110 out of 140 members” were present, says Carl Cargill, an independent standards consultant who sits on the consortium’s eight-person advisory board and was formerly Netscape’s representative to W3C. By December, when a meeting was held in England, membership in the W3C had risen to 170, but only 90 showed up. In June of 1997, only 70 out of 180 members showed up for the semiannual meeting held in Japan. At the end of that year, only 70 of 240 member organizations were represented at the meeting in Geneva. Cargill says he thinks companies have stopped sending people to meetings because they realize that the Advisory Committee merely rubber-stamps what Berners-Lee wants to do.

At least one major player has gone further than merely not attending the assemblies. MCI recently withdrew from the consortium altogether, citing the costs in staff time of participating. “Effective participation in the work of W3C required a higher commitment of senior staff time than we could justify,” says Vint Cerf, an MCI senior vice president and one of the architects of today’s Internet.
