What Comes After Web 2.0?

Today’s primitive prototypes show that a more intelligent Internet is still a long way off.
December 1, 2006

Many researchers and entrepreneurs are working on Internet-based knowledge-organizing technologies that stretch traditional definitions of the Web. Lately, some have been calling the technologies “Web 3.0.” But really, they’re closer to “Web 2.1.”

Typically, the name Web 2.0 is used by computer programmers to refer to a combination of a) improved communication between people via social-networking technologies, b) improved communication between separate software applications–read “mashups”–via open Web standards for describing and accessing data, and c) improved Web interfaces that mimic the real-time responsiveness of desktop applications within a browser window.

To see how these ideas may evolve, and what may emerge after Web 2.0, one need only look to groups such as MIT’s Computer Science and Artificial Intelligence Laboratory, the World Wide Web Consortium, Amazon.com, and Google. All of these organizations are working toward a smarter Web, and some of their prototype implementations are available on the Web for anyone to try. Many of these projects emphasize leveraging the human intelligence already embedded in the Web in the form of data, metadata, and links between data nodes. Others aim to recruit live humans and apply their intelligence to tasks computers can’t handle. But none are ready for prime time.

The first category of projects is related to the Semantic Web, a vision for a smarter Web laid out in the late 1990s by World Wide Web creator Tim Berners-Lee. The vision calls for enriching every piece of data on the Web with metadata conveying its meaning. In theory, this added context would help Web-based software applications use the data more appropriately.

My current Web calendar, for example, knows very little about me, except that I have appointments today at 8:30 A.M. and 4:00 P.M. A Semantic Web calendar would not only know my name, but would also have a store of standardized metadata about me, such as “lives in: Las Vegas,” “born in: 1967,” “likes to eat: Thai food,” “belongs to: Stonewall Democrats,” and “favorite TV show: Battlestar Galactica.” It could then function much more like a human secretary. If I were trying to set up the next Stonewall Democrats meeting, it could sift through the calendars of other members and find a time when we’re all free. Or if I asked the calendar to find me a companionable lunch date, it could scan public metadata about the friends, and friends of friends, in my social network, looking for someone who lives nearby, is of a similar age, and appreciates Asian food and sci-fi.
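
No calendar can do this today, but the matching itself is simple once the metadata exists. Purely as illustration, here is a minimal Python sketch of that kind of filtering; every person, field, and value below is hypothetical.

```python
# A toy "companionable lunch date" filter; all people and metadata
# fields here are hypothetical.
me = {
    "lives_in": "Las Vegas",
    "born_in": 1967,
    "likes_to_eat": "Thai food",
    "favorite_tv_show": "Battlestar Galactica",
}

friends_of_friends = [
    {"name": "Alex", "lives_in": "Las Vegas", "born_in": 1970,
     "likes_to_eat": "Thai food", "favorite_tv_show": "Battlestar Galactica"},
    {"name": "Sam", "lives_in": "Boston", "born_in": 1985,
     "likes_to_eat": "Pizza", "favorite_tv_show": "The Office"},
]

def companionable(person, max_age_gap=10):
    """Crude matching: lives nearby, similar age, shared tastes."""
    return (person["lives_in"] == me["lives_in"]
            and abs(person["born_in"] - me["born_in"]) <= max_age_gap
            and person["likes_to_eat"] == me["likes_to_eat"]
            and person["favorite_tv_show"] == me["favorite_tv_show"])

print([p["name"] for p in friends_of_friends if companionable(p)])  # ['Alex']
```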

Alas, there’s no such technology yet, partly because of the gargantuan effort that would be required to tag all the Web’s data with metadata, and partly because there’s no agreement on the right format for metadata itself. But several projects are moving in this direction, including FOAF, short for Friend of a Friend. FOAF files, first designed in 2000 by British software developers Libby Miller and Dan Brickley, are brief personal descriptions written in a standard computer language called the Resource Description Framework (RDF); they contain information such as a person’s name, nicknames, e-mail address, homepage URL, and photo links, as well as the names of the people that person knows.
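
The FOAF vocabulary itself is openly published at xmlns.com. As a rough sketch of what goes into such a file, the Python snippet below uses the open-source rdflib library to assemble a minimal FOAF description; the names, addresses, and URLs are invented.

```python
# A minimal FOAF description assembled with the open-source Python
# library rdflib; the person, addresses, and URLs are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")  # the real FOAF vocabulary

g = Graph()
g.bind("foaf", FOAF)

me = URIRef("http://example.com/jane#me")
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Jane Doe")))
g.add((me, FOAF.nick, Literal("jane")))
g.add((me, FOAF.mbox, URIRef("mailto:jane@example.com")))
g.add((me, FOAF.homepage, URIRef("http://example.com/jane")))

# A FOAF file also names the people its author knows.
friend = URIRef("http://example.com/john#me")
g.add((friend, RDF.type, FOAF.Person))
g.add((friend, FOAF.name, Literal("John Smith")))
g.add((me, FOAF.knows, friend))

print(g.serialize(format="xml"))  # emits RDF/XML, the format FOAF files use
```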

I generated my own FOAF file this week using the simple forms at a free site called Foaf-a-matic and uploaded it to my blog site. In theory, other people using FOAF-enabled search software such as FOAF Explorer, or “identity hub” websites such as People Aggregator, will now be able to find me more easily.

Eventually, more may be possible. For example, I could instantly create a network of friends on a new social-networking service simply by importing my FOAF file. But for now, there aren’t a lot of ways to put your FOAF file to work.

Another project attempting to extract more meaning from the Web is Piggy Bank, a joint effort by MIT’s Computer Science and Artificial Intelligence Laboratory, MIT Libraries, and the World Wide Web Consortium. Piggy Bank’s goal is to lift important chunks of information out of data-heavy websites so that Web surfers can put those chunks to use in new ways. For example, office address information extracted from LinkedIn, a professional networking site, could be fed into Google Maps, creating a map of my colleagues’ places of business.

In this way, the Piggy Bank researchers hope, Web users can begin to get a taste of the Semantic Web in action, without having to wait for the authors of the billions of documents on the Web to create metadata. The curious can download a Piggy Bank extension for the Firefox Web browser; once the extension is installed, users can choose from a number of “screenscrapers” that extract information from specific sites like LinkedIn and Flickr (a popular photo-sharing site). Piggy Bank stores this “pure information,” such as photos or contact names, inside the Web browser in RDF format, theoretically allowing users to mix data from independent sources to create their own “instant mashups” similar to the LinkedIn-Google Maps example.
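
Piggy Bank’s actual screenscrapers are JavaScript programs running inside Firefox, but the underlying idea, pulling structured fields out of a page and restating them as RDF-style triples, can be sketched in a few lines of Python. The page snippet and vocabulary URI below are invented for illustration.

```python
# Piggy Bank's real screenscrapers run as JavaScript inside Firefox;
# this Python sketch only illustrates the underlying idea. The page
# snippet and vocabulary URI are invented for illustration.
import re

page = """
<div class="contact"><span class="name">Ada Lovelace</span>
<span class="office">100 Main St, Cambridge, MA</span></div>
<div class="contact"><span class="name">Alan Turing</span>
<span class="office">200 Elm St, Boston, MA</span></div>
"""

pattern = re.compile(
    r'<span class="name">(.*?)</span>\s*<span class="office">(.*?)</span>',
    re.S,
)

VOCAB = "http://example.com/vocab#"  # hypothetical RDF vocabulary

triples = []
for name, office in pattern.findall(page):
    person = "http://example.com/person/" + name.replace(" ", "_")
    triples.append((person, VOCAB + "name", name))
    triples.append((person, VOCAB + "officeAddress", office))

# "Pure information," ready to be remixed, say, geocoded onto a map.
for triple in triples:
    print(triple)
```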

Unfortunately, there aren’t yet any tools that make it easy for nonprogrammers to reuse the RDF data in such mashups. And in my own tests of Piggy Bank, the screenscrapers failed to activate. I’m sure that’s because I missed something in the instructions–but the problem does illustrate how much more work is needed before such tools will be ready for public consumption.

A second category of post-Web 2.0 projects focuses not on helping machines understand the meaning and the uses of existing Web content, but on recruiting real people to add their intelligence to information before it’s used. The best-known example is Amazon Mechanical Turk, a kind of high-tech temp agency introduced by the online retailer in 2005. The service allows people with tasks and questions that computers can’t handle–for example, spotting inappropriate images in a collection of photos–to hire other Web users to help.

The employment is extremely temporary–less than an hour per task, in most cases–and the pay is ridiculously low: solutions typically earn the worker only a few cents. But the point isn’t to provide Internet addicts with a second income: it’s to harness users’ brainpower for a few spare moments to carry out simple tasks that remain far beyond the capabilities of artificial-intelligence software. (In fact, Amazon calls its project a form of “artificial artificial intelligence.”)

Some tasks are really marketing or product research in disguise. One questioner, for example, asks, “What would make your e-mail better?” Others offer better illustrations of the logic behind breaking up a big data-classification task and distributing it to hundreds of people. One task, apparently from someone trying to make it possible to share information between various Yellow Pages-style directories, asks users to match categories from one directory–say, “Delicatessens”–with the closest equivalents in another–for example, “Delis” or “Small Restaurants.” A computer couldn’t tackle such a task without years of training on the mundane facts of human existence, such as the fact that a delicatessen is indeed one form of a small restaurant. A human, however, can find the right matches in seconds.
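
The decomposition logic is easy to sketch: each category to be matched becomes one tiny paid question, and several workers’ answers to the same question can be reconciled by majority vote. The Python sketch below is not Amazon’s actual API; the categories and simulated responses are invented.

```python
# A sketch of how a directory-matching job might be broken into
# micro-tasks ("HITs") and the workers' answers reconciled; this is
# not Amazon's actual API, and the categories and simulated
# responses are invented.
from collections import Counter

source_categories = ["Delicatessens", "Shoe Repair", "Notaries"]
target_categories = ["Delis", "Small Restaurants", "Cobblers", "Legal Services"]

def make_hits(categories, choices):
    """One micro-task per source category."""
    return [{"category": c,
             "question": f"Which of {choices} best matches '{c}'?"}
            for c in categories]

def aggregate(answers):
    """Majority vote across several workers' answers to one HIT."""
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

hits = make_hits(source_categories, target_categories)
print(hits[0]["question"])

# Three simulated workers answer the 'Delicatessens' HIT.
print(aggregate(["Delis", "Delis", "Small Restaurants"]))  # Delis
```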

Another project that attempts to persuade humans to add meaning to raw data is the Google Image Labeler. It entices users to label digital photographs according to their content by making the task into a simple game in which contestants must both collaborate and compete. Like Amazon Mechanical Turk, the Image Labeler has a community of fans who enjoy it as a game. And there’s nothing wrong with making potentially dull tasks entertaining, if that’s what it takes to motivate “workers.” But the Image Labeler and the Mechanical Turk will have to grow beyond their toylike demonstration stage before they have a real impact on the Web’s usability.
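
The game’s core rule is agreement: roughly, two players see the same image and type labels independently, and a label is kept only when both players produce it, which filters out idle or mischievous guesses. Here is a minimal sketch of that rule, with invented labels.

```python
# The Image Labeler pairs two players on the same image; a label
# counts only when both type it, filtering out noise. A minimal
# sketch of that agreement rule, with invented labels.

def agreed_labels(player_a, player_b, off_limits=()):
    """Keep labels both players typed, minus any words the game has
    declared off-limits for this image."""
    return (set(player_a) & set(player_b)) - set(off_limits)

a = ["dog", "beach", "frisbee"]
b = ["beach", "sand", "frisbee"]
print(agreed_labels(a, b, off_limits=["beach"]))  # {'frisbee'}
```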

It’s not surprising that observers are reaching for new labels to describe the work going on beyond the boundaries of today’s Web 2.0. But most of these projects are so far from producing practical tools–let alone services that could be commercialized–that it’s premature to say they represent a “third generation” of Web technology. For that, judging from today’s state of the art, we’ll need to wait another few years.
