This article appears in the March/April 2007 issue of Technology Review.
Last year, Eric Miller, an MIT-affiliated computer scientist, stood on a beach in southern France, watching the sun set, studying a document he’d printed earlier that afternoon. A March rain had begun to fall, and the ink was beginning to smear.
Five years before, he’d agreed to lead a diverse group of researchers working on a project called the Semantic Web, which seeks to give computers the ability–the seeming intelligence–to understand content on the World Wide Web. At the time, he’d made a list of goals, a copy of which he now held in his hand. If he’d achieved those goals, his part of the job was done.
Taking stock on the beach, he crossed off items one by one. The Semantic Web initiative’s basic standards were in place; big companies were involved; startups were merging or being purchased; analysts and national and international newspapers, not just technical publications, were writing about the project. Only a single item remained: taking the technology mainstream. Maybe it was time to make this happen himself, he thought. Time to move into the business world at last.
“For the Semantic Web, it was no longer a matter of if but of when,” Miller says. “I felt I could be more useful by helping people get on with it.”
Now, six months after the launch of Zepheira, his consulting company, which helps businesses link fragmented data sources into easily searched wholes, Miller’s beachside decision seems increasingly prescient. The Semantic Web community’s grandest visions, of data-surfing computer servants that automatically reason their way through problems, have yet to be fulfilled. But the basic technologies that Miller shepherded through research labs and standards committees are joining the everyday Web. They can be found everywhere–on entertainment and travel sites, in business and scientific databases–and are forming the core of what some promoters call a nascent “Web 3.0.”
Already, these techniques are helping developers stitch together complex applications or bring once-inaccessible data sources online. Semantic Web tools now in use improve and automate database searches, helping people choose vacation destinations or sort through complicated financial data more efficiently. It may be years before the Web is populated by truly intelligent software agents automatically doing our bidding, but their precursors are helping people find better answers to questions today.
The “3.0” claim is ambitious, casting these new tools as successors to several earlier–but still viable–generations of Net technology. Web 1.0 refers to the first generation of the commercial Internet, dominated by content that was only marginally interactive. Web 2.0, characterized by features such as tagging, social networks, and user-created taxonomies of content called “folksonomies,” added a new layer of interactivity, represented by sites such as Flickr, Del.icio.us, and Wikipedia.
Analysts, researchers, and pundits have subsequently argued over what, if anything, would deserve to be called “3.0.” Definitions have ranged from widespread mobile broadband access to a Web full of on-demand software services. A much-read article in the New York Times last November clarified the debate, however. In it, John Markoff defined Web 3.0 as a set of technologies that offer efficient new ways to help computers organize and draw conclusions from online data, and that definition has since dominated discussions at conferences, on blogs, and among entrepreneurs.