An uncut version of this article, with additional content that had to be removed from the print edition for space reasons, can be found here.
A thunderhead towers at knee level, throwing tiny lightning bolts at my shoes. I’m standing–rather, my avatar is standing–astride a giant map [SLurl] of the continental United States, and southern Illinois, at my feet, is evidently getting a good April shower.
The weather is nicer on the East Coast: I can see pillowy cumulus clouds floating over Boston and New York, a few virtual meters away. I turn around and look west toward Nevada. There isn’t a raindrop in sight, of course; the region’s eight-year drought is expected to go on indefinitely, thanks to global warming. But I notice something odd, and I walk over to investigate.
The red polka dots over Phoenix and Los Angeles indicate a hot day, as I would expect. But the dot over the North Las Vegas airport is deep-freeze blue. That can’t be right. My house is only 30 kilometers from the airport, and I’ve had the air conditioner running all day.
Once you've downloaded the appropriate software from Second Life and Google Earth, many locations mentioned in this story can be accessed using special links in the copy. Links saying "SLurl" will open a Second Life viewer. Links saying "Google Earth location" will open Google Earth and spin the virtual globe to the proper coordinates.
“Any clue why this dot is blue?” I ask the avatar operating the weather map’s controls. The character’s name, inside the virtual world called Second Life, is Zazen Manbi; he has a pleasant face and well-kept chestnut hair, and the oval spectacles perched on his nose give him a look that’s half academic, half John Lennon. The man controlling Manbi is Jeffrey Corbin, a research assistant in the Department of Physics and Astronomy at the University of Denver.
“Let me check something,” Manbi/Corbin responds. “I can reset the map–sometimes it gets stuck.” He presses a button, and fresh data rushes in from the National Oceanic and Atmospheric Administration’s network of airport weather stations. The clouds over the East shift slightly. Los Angeles goes orange, meaning it’s cooled off a bit. But there’s still a spot of indigo over Vegas.
“I guess it’s feeling blue,” he jokes.
The map I am standing on belongs to NOAA, and it covers a 12-by-20-meter patch of lawn on a large virtual island sustained entirely by servers and software at San Francisco-based Linden Lab, which launched Second Life in 2003. (On the map’s scale, my avatar is about 500 kilometers tall, which makes Illinois about three paces across.) Corbin, who’s on a personal mission to incorporate 3-D tools like this one into the science curriculum at Denver, paid Linden Lab for the island so that he could assemble exhibits demonstrating to the faculty how such tools might be used pedagogically. “Every student at DU is required to have a laptop,” he says. “But how many of them are just messaging one another in class?” A few more science students might learn something if they could walk inside a weather map, he reasons.
Corbin’s got plenty to show off: just west of the map is a virtual planetarium, a giant glass box housing a giant white sphere that in turn houses a giant orrery illustrating the geometry of solar eclipses. And he’s not the only one to offer such attractions. Just to the south, on an adjoining island, is the International Spaceflight Museum [video] [SLurl], where visitors can fly alongside life-size rockets, from the huge Apollo-era Saturn V to a prototype of the Ares V, one of the launch vehicles NASA hopes to use to send Americans back to the moon.
Second Life, which started out four years ago as a 1-square-kilometer patch with 500 residents, has grown into almost 600 square kilometers of territory spread over three minicontinents, with 6.9 million registered users and 30,000 to 40,000 residents online at any moment. It’s a world with birdsong, rippling water, shopping malls, property taxes, and realistic physics. And life inside is almost as varied as it is outside. “I help out new citizens, I rent some houses on some spare land I have, I socialize,” says a longtime Second Lifer whose avatar goes by the name Alan Cyr. “I dance far better than I do in real life. I watch sunsets and sunrises, go swimming, exploring, riding my Second Life Segway. I do a lot of random stuff.”
But aside from such diversions, the navigation tools provided by Second Life–users can fly and hover like Superman and zoom between micro and macro views of any object–make it an excellent place to investigate phenomena that would otherwise be difficult to visualize or understand. In that sense, this hideaway from the reality outside is beginning to function as an alternative lens on it. Ever wondered when the International Space Station might pass overhead? At the spaceflight museum, your avatar can fly alongside models of the station, the Hubble Space Telescope, and many other satellites as they orbit a 10-meter-diameter globe in sync with real-world data from the Air Force Space Command [video] [SLurl]. Or perhaps you suspect a bad call by the line judges at Wimbledon. If so, just stroll a virtual tennis court inside Second Life and examine the paths of every serve and volley of a match in progress, reproduced by IBM in close to real time [YouTube video].
Of course, from within a virtual world like Second Life, the real world can be glimpsed only through the imperfect filters of today’s software and hardware. Barring a startling increase in the Internet bandwidth available to the average PC user or a plunge in the cost of stereoscopic virtual-reality goggles, we will continue to experience virtual worlds as mere representations of 3-D environments on our flat old computer screens. And your avatar obviously isn’t really you; it’s a cartoonish marionette awkwardly controlled by your mouse movements and keyboard commands. Moreover, at the moment, every conversation inside a virtual world must be laboriously typed out (although Linden Lab will soon add an optional voice-chat function to Second Life).
So while virtual worlds are good for basic instruction and data representation, professionals aren’t yet rushing to use them to analyze large amounts of spatial information. For that, they stick to specialized design, animation, modeling, and mapping software from companies like Autodesk and ESRI. But there’s another new genre of 3-D visualization tools accessible to both professionals and average Internet users: “virtual globe” programs such as Google Earth, Microsoft’s Virtual Earth, and NASA’s open-source World Wind. Virtual globes let you plot your city’s sewer system, monitor a network of environmental sensors, count up the frequent-flyer miles between New York and New Delhi, or just soar through a photorealistic 3-D model of the Grand Canyon [Google Earth location].
Even as social virtual worlds incorporate a growing amount of real-world data, virtual globes and their 2-D counterparts, Web maps, are getting more personal and immersive. Digital maps are becoming a substrate for what Di-Ann Eisnor, CEO of the mapping site Platial in Portland, OR, calls “neogeography”: an explosion of user-created content, such as travel photos and blog posts, pinned to specific locations (see “Killer Maps,” October 2005). Using Platial’s map annotation software, people have created public maps full of details about everything from the history of genocide to spots for romance. Google has now built a similar annotation feature directly into Google Maps. “The idea that maps can be emotional things to interact with is fairly new,” says Eisnor. “But I can imagine a time when the base map is just a frame of reference, and there is much more emphasis on the reviews, opinions, photos, and everything else that fits on top.”
As these two trends continue from opposite directions, it’s natural to ask what will happen when Second Life and Google Earth, or services like them, actually meet.
Because meet they will, whether or not their owners are the ones driving their integration. Both Google and Linden Lab grant access to their existing 3-D platforms through tools that let outside programmers build their own auxiliary applications, or “mashups.” And many computer professionals think the idea of a “Second Earth” mashup is so cool that it’s inevitable, whether or not it will offer any immediate way to make money. “As long as somebody can find some really strong personal gratification out of doing it, then there is a driver to make it happen,” says Jamais Cascio, a consultant who cofounded the futurist website WorldChanging.com and helps organizations plan for technological change.
The first, relatively simple step toward a Second Earth, many observers predict, will be integrating Second Life’s avatars, controls, and modeling tools into the Google Earth environment. Groups of users would then be able to walk, fly, or swim across Google’s simulated landscapes and explore intricate 3-D representations of the world’s most famous buildings. Google itself may or may not be considering such a project. “It’s interesting, and I think there are people who want to do that,” says John Hanke, director of the division of the company responsible for Google Earth. “But that’s not something where we have any announcements or immediate plans to talk about it.”
A second alternative would be to expand the surface area of Second Life by millions of square kilometers and model the new territory on the real earth, using the same topographical data and surface imagery contained in Google Earth. (The existing parts of Second Life could remain, perhaps as an imaginary archipelago somewhere in the Pacific.) That’s a much more difficult proposition, for practical, technical reasons that I’ll get to later. And in any case, Linden Lab says it’s not interested.
But within 10 to 20 years–roughly the same time it took for the Web to become what it is now–something much bigger than either of these alternatives may emerge: a true Metaverse. In Neal Stephenson’s 1992 novel Snow Crash, a classic of the dystopian “cyberpunk” genre, the Metaverse was a planet-size virtual city that could hold up to 120 million avatars, each representing someone in search of entertainment, trade, or social contact. The Metaverse that’s really on the way, some experts believe, will resemble Stephenson’s vision, but with many alterations. It will look like the real earth, and it will support even more users than the Snow Crash cyberworld, functioning as the agora, laboratory, and gateway for almost every type of information-based pursuit. It will be accessible both in its immersive, virtual-reality form and through peepholes like the screen of your cell phone as you make your way through the real world. And like the Web today, it will become “the standard way in which we think of life online,” to quote from the Metaverse Roadmap, a forecast published this spring by an informal group of entrepreneurs, media producers, academics, and analysts (Cascio among them).
But don’t expect it to run any more smoothly than the real world. I called programmer and 3-D modeler Alyssa LaRoche, who created the immersive weather map for NOAA, to see if she could explain that pesky blue dot over Las Vegas. As it turns out, a networking glitch was preventing the airport weather feed from reaching the map inside Second Life. And when the map doesn’t get the data it’s expecting, the temperature dots default to blue. So Corbin was right, in a way.
While Second Life and Google Earth are commonly mentioned as likely forebears of the Metaverse, no one thinks that Linden Lab and Google will be its lone rulers. Their two systems are interesting mainly because they already have many adherents, and because they exemplify two fundamentally different streams of technology that will be essential to the Metaverse’s construction.
Second Life is a true virtual world, unconstrained by any resemblance to the real planet. What unites it and similar worlds such as There, Entropia Universe, Moove, Habbo Hotel, and Kaneva–beyond their 3-D graphics–is that they’re free-flowing, ungoverned communities shaped by the shared imaginations of their users. “Consensual hallucination” was the term William Gibson used in his groundbreaking 1984 cyberpunk novel Neuromancer, which posited a Matrix-like cybersphere years before Snow Crash. These worlds are not games, however. Users don’t go on quests or strive to acquire more gold or magic spells; they’re far more likely to spend their time at virtual campfires, discos, and shopping malls. This sets these environments firmly apart from massively multiplayer 3-D gaming worlds such as Sony’s EverQuest, Blizzard Entertainment’s World of Warcraft, and NCsoft’s Lineage II, which together have far more users.
Google Earth and competing programs such as Microsoft Virtual Earth, on the other hand, are more accurately described as mirror worlds–a term invented by Yale University computer scientist David Gelernter (see “Artificial Intelligence Is Lost in the Woods”) to denote geographically accurate, utilitarian software models of real human environments and their workings. If they were books, virtual worlds would be fiction and mirror worlds would be nonfiction. They are microcosms: reality brought down to a size at which it can be grasped, manipulated, and rearranged, like an obsessively detailed dollhouse. And they’re used to keep track of the real world rather than to escape from it. Environmental scientists and sensor-net researchers, for example, are already feeding live data on climate conditions, pollution, and the like into Google Earth and Microsoft Virtual Earth, where the added spatial and geographical dimensions give extra context and help reveal hidden patterns.
It’s easy to see how a detailed mirror world might bring a tactical advantage to a large corporation, government agency, or military force–for example, by making it easier for the Wal-Marts of the future to track merchandise from factory to warehouse to retail shelf, or explore what-if scenarios such as the impact of a major storm on the supply chain. But when mirror worlds are joined by a third technology stream–what’s being called “mobile augmented reality”–they will become even more indispensable.
Mobile augmented reality is a way of using the data underlying mirror worlds without experiencing those worlds immersively. The extensive 3-D simulations in mirror worlds will, in the words of the Metaverse Roadmap, be draped over the real world and accessed locally in 2-D through location-aware mobile devices such as wireless phones. Even the screen of a GPS-enabled camera phone could serve as a temporary window into the Metaverse. Carry it with you on your next house-hunting expedition, for example, and it could connect to real-estate databases containing 3-D floor plans and information on sale prices, property taxes, and the like for every house on every block. Or point it at one of the turbines on your wind farm and see Google Earth’s virtual version of the structure, supplemented by engineering specifications, maintenance history, and a graph of hourly power output. Finnish cell-phone giant Nokia, French startup Total Immersion, and others are building prototype augmented-reality systems now and expect the big wireless carriers to take an interest soon (see “Augmented Reality” in “Emerging Technologies 2007,” March/April).
It would be far too simple to say that the Metaverse will consist of Linden Lab’s virtual world with maps, or Google’s mirror world with avatars, or some augmented-reality slice of either one. In fact, Second Life and Google Earth are likely to endure just as they are (with the usual upgrades) well into the Metaverse era. What’s coming is a larger digital environment combining elements of all these technologies–a “3-D Internet,” to use the term preferred by David Rolston, CEO of Forterra Systems, a company in San Mateo, CA, that makes immersive training simulations for the U.S. Department of Defense and for first-responder agencies. People will enter this environment using PC-based software similar to the programs that already grant access to Second Life and Google Earth. These “Metaverse browsers” will be to the 3-D Internet what Mosaic and Netscape were to the dot-com revolution–tools that both provide structure (by defining what’s possible) and enable infinite experimentation.
“There will be a bunch of different worlds, owned, controlled, and operated by different organizations,” Rolston predicts. “They will be built on different platforms, and you will have community standards about how you can connect these worlds, and open-source software that carries you between them.” The word “Metaverse” will refer to both the overarching collection of these worlds and the main port of entry to them, a sort of Grand Cyber Station that links to all other destinations. The central commons itself could be designed as a mirror world or a virtual world or some interleaving of the two: people logging in to the Metaverse might want it to look like Manhattan or the Emerald City of Oz, depending on the task at hand. But either way, partisans say, the full Metaverse will encompass thousands of individual virtual worlds and mirror worlds, each with its own special purpose. To borrow a trope from corporate networking, it will be an “interverse” connecting many local “intraverses.”
Rolston has already had plenty of experience building such separate worlds. Some of Forterra’s simulations are “geotypical”–plausible imitations of generic landscapes and urban environments–and others are “geospecific,” reproducing actual places such as the entrances to Baghdad’s battered Green Zone. The worlds of the Metaverse will be much more diverse but still bridgeable, Rolston predicts. “Portions of this 3-D Internet will be anchored to the real planet and will involve real-world activities, and others will not be,” he says. “People will move freely between representations of the real world and representations of synthetic fantasy worlds, and feel equally comfortable in both.”
For people who haven’t spent much time in a 3-D world, of course, it’s hard to imagine feeling comfortable in either. But such environments may soon be as unavoidable as the Web itself: according to technology research firm Gartner, current trends suggest that 80 percent of active Internet users and Fortune 500 companies will participate in Second Life or some competing virtual world by the end of 2011. And if you take a few months to explore Second Life, as I have done recently, you may begin to understand why many people have begun to think of it as a true second home–and why 3-D worlds are a better medium for many types of communication than the old 2-D Internet.
To begin with, Second Life is beautiful–wholly unlike the Metaverse one might imagine from reading Snow Crash. It has rolling grass-covered hills and snowy mountains, lush tropical jungles, tall pines that sway gently in the breeze, and Romanesque fountains with musically tinkling water. Linden Lab thoughtfully arranges a gorgeous golden-orange sunset every four hours.
A beautiful environment, however, isn’t enough to make a virtual world compelling. Single-player puzzle worlds such as Myst provided riveting 3-D graphics as long ago as the early 1990s, but these worlds were utterly lonely, leaving users with no reason to return after all the puzzles had been solved. Part of Second Life’s appeal, by contrast, is that it’s always crowded with thousands of other people. If you want company, just head for a clump of green dots on the Second Life world map–that’s where you’ll find people gathering for concerts, lectures, competitions, shopping, museum-going, and dancing. “Second Life is best viewed as a communication technology, just like the telephone,” says Cory Ondrejka, Linden Lab’s chief technology officer. “Except that you don’t communicate by voice; you communicate by shared experience.” And unlike the telephone system, Second Life is free (unless you want to own land, which means upgrading to premium membership for $9.95 per month).
Second Life residents also communicate through the buildings and other objects they create. Using built-in 3-D modeling tools, any resident can create something simple, like a flowerpot or a crude hut. But the revered wizards of the community are those who can quickly conjure basic building blocks called “prims” and reshape and combine them into complex objects, from charm bracelets and evening gowns to airplanes and office buildings. Alyssa LaRoche, creator of the NOAA weather map, is one of these builder-wizards. She started creating things as soon as she joined Second Life in January 2004, and by April 2006 she had quit her day job as an IT consultant in the financial-services industry to start a Second Life design agency called Aimee Weber Studio (after her avatar’s name). Business has been so brisk that LaRoche now employs four other full-time modelers and 19 contractors. “I’m certainly making more money than I made at my job as a consultant,” she says. Her agency recently finished an entire island of oceanographic and meteorological exhibits for NOAA, including a glacier, a submarine tour of a tropical reef, and an airplane ride through a hurricane [video] [SLurl].
NOAA commissioned its island as a kind of educational amusement park, a Weather World. But other parts of Second Life are more businesslike. Dozens of companies, including IBM [SLurl], Sony Ericsson [SLurl], and American Apparel [video] [SLurl], have bought land in the virtual world, and most have already built storefronts or headquarters where their employees’ avatars can do business. In March, for example, Coldwell Banker opened a Second Life real-estate brokerage where new residents can tour model virtual homes and make purchases at below-market rates [video] [SLurl]. In 2006, Starwood Hotels used Second Life as a virtual testing ground for a new chain of real-world hotels, called Aloft. The company constructed a prototype where visitors could walk the grounds, swim in the pool, relax in the lobby, and inspect the guest rooms [video] [SLurl]. It’s incorporating suggestions from Second Lifers into the design of the first real Aloft hotel, set to open in Rancho Cucamonga, CA, in 2008 [YouTube video].
Most structures in the Second Life universe, of course, lack any serious business purpose. But that doesn’t mean they have no relation to the real world. One of Second Life’s most trafficked places is a detailed re-creation of downtown Dublin [video] [SLurl]. The main draw: the Blarney Stone Irish pub, where there is live music most nights, piped in from real performance spaces via the Internet. A short teleport-hop away from virtual Dublin is virtual Amsterdam, where the canals, the houseboats, and even the alleyways of the red-light district have been textured with photographs from the real Amsterdam to lend authenticity [video] [SLurl].
This reimagining of the real world can go only so far, given current limitations on the growth of Linden Lab’s server farm, the amount of bandwidth available to stream data to users, and the power of the graphics card in the average PC. According to Ondrejka, Linden Lab must purchase and install more than 120 servers every week to keep up with all the new members pouring into Second Life, who increase the computational load by creating new objects and demanding their own slices of land. Each server at Linden Lab supports one to four “regions,” 65,536-square-meter chunks of the Second Life environment–establishing the base topography, storing and rendering all inanimate objects, animating avatars, running scripts, and the like. This architecture is what makes it next to impossible to imagine re-creating a full-scale earth within Second Life, even at a low level of detail. At one region per server, simulating just the 29.2 percent of the planet’s surface that’s dry land would require 2.3 billion servers and 150 dedicated nuclear power plants to keep them running. It’s the kind of system that “doesn’t scale well,” to use the jargon of information technology.
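Ondrejka’s back-of-the-envelope estimate holds up. Here is the arithmetic as a quick sketch; Earth’s surface area and land fraction are standard reference values, and the one-region-per-server assumption comes straight from the architecture described above:

```python
# Rough check of the "2.3 billion servers" estimate for mirroring
# Earth's dry land in Second Life at one region per server.
EARTH_SURFACE_KM2 = 510_000_000   # total surface area of Earth, ~5.1e8 km^2
LAND_FRACTION = 0.292             # share of the surface that is dry land
REGION_M2 = 256 * 256             # one Second Life region = 65,536 m^2

land_m2 = EARTH_SURFACE_KM2 * 1_000_000 * LAND_FRACTION
servers = land_m2 / REGION_M2     # one region per server, per the article

print(f"{servers / 1e9:.1f} billion servers")  # -> 2.3 billion servers
```

In practice a server ran up to four regions, which would cut the count by a factor of four at most; the conclusion that the architecture “doesn’t scale well” is unchanged.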
But then, Linden Lab’s engineers never designed Second Life’s back end to scale that way. Says Ondrejka, “We’re not interested in 100 percent veracity or a true representation of static reality.”
And they don’t have to be. As it turns out, simulations need not be convincing to be enveloping. “It’s not an issue of engaging the eyes and the hands, but rather of engaging the heart and the mind,” says Corey Bridges, executive producer at the Multiverse Network, which sells a standardized virtual-world platform that developers can tailor to their own needs. “If you can form a connection with someone, even just with a mouse and a keyboard and a video screen, whether it’s in Second Life or World of Warcraft, that is far more powerful than even the best virtual-reality simulation.”
Personal connections may be what a lot of people want, but going by the numbers, Google Earth is far more popular than any virtual world, including the big role-playing worlds like Lineage II (which has 14 million subscribers) and World of Warcraft (more than 8 million). By the spring of 2007, less than two years after it was launched, Google Earth had already been downloaded more than 250 million times.
Google Earth and its lesser-known imitator, Microsoft Virtual Earth, owe their existence to a convergence in the early 2000s of several trends, including a drop in the price of satellite and aerial imagery, the more widespread availability of topographical and other geographical information collected by national governments around the world, the standardization of 3-D modeling technologies originally developed for video games, and the spread of consumer PCs with graphics cards capable of 3-D hardware acceleration. But the programs’ philosophical roots go back much further than that. John Hanke, who developed the original software behind Google Earth at a small company called Keyhole (which Google acquired in 2004), says that Snow Crash’s description of a 3-D program called Earth–“a globe about the size of a grapefruit, a perfectly detailed rendition of Planet Earth”–was part of his inspiration.
An equally detailed vision of a virtual earth was laid out in another book from the same era, David Gelernter’s Mirror Worlds: Or the Day Software Puts the Universe in a Shoebox … How It Will Happen and What It Will Mean. “The software model of your city, once it’s set up, will be available (like a public park) to however many people are interested,” Gelernter predicted. “It will sustain a million different views. … Each visitor will zoom in and pan around and roam through the model as he chooses.” Institutions such as universities and city governments would nourish the mirror world with a constant flow of data. The latest information on traffic jams, stock prices, or water quality would appear exactly where expected–overlaid on virtual roads and stock exchanges and water mains. But just as important, mirror worlds would function as social spaces, where people seeking similar information would frequently cross paths and share ideas. They would be “beer halls and grand piazzas, natural gathering places for information hunters and insight searchers.”
On page 203 of Mirror Worlds is a striking architectural drawing showing a bird’s-eye view of a fictional city distinguished by elegant skyscrapers, broad avenues, and abundant parkland. Superimposed on the view are several blank white boxes where, in Gelernter’s hypothetical mirror world, information about the streets and buildings would be displayed. The caption describes the drawing as “an abstract sketch, merely the general idea” of what a mirror-world interface might look like.
If the sketch looks familiar today, it’s because thousands of views like it can be found using Google Earth or Microsoft Virtual Earth, complete with 3-D buildings and white pop-up info boxes. There are superficial differences: the Google and Microsoft cityscapes, for example, are photorealistic, at least in the limited areas where buildings are covered with “skins” based on photographs of the real structures (like the virtual Amsterdam in Second Life). But Gelernter anticipated so many features of today’s virtual-globe software that these programs could readily serve today as the windows on a mirror world as he imagined it. In fact, Google Earth users can access a growing library of public and personal data, from national borders to Starbucks locations, jogging routes, and vacation photos–in effect, any kind of information that’s been “geocoded.”
Open geocoding standards allow anyone to contribute to the Google Earth mirror world. Just as Web browsers depend on HTML to figure out how and where to display text and images on a Web page, Google Earth depends on a standard called KML, the Keyhole Markup Language, to tell it where geographic data should be placed on the underlying latitude-longitude grid. If you know how to assemble a KML file, you can make your own geographical data appear as a new “layer” on your computer’s copy of Google Earth; and if you publish that KML file on the Web, other people can download the layer and display it on their own computers.
This layering capability transforms Google Earth from a mere digital globe into something more like a 3-D Wikipedia of the planet. The results can be unexpectedly arresting. In one recent example, the U.S. Holocaust Memorial Museum worked with Google to create a layer highlighting the locations of 1,600 villages ravaged by the Sudanese government’s ongoing campaign to wipe out non-Arab tribes in the Darfur region. By zooming in on these locations, a user can see the remnants of the actual settlements destroyed by the Janjaweed, the government’s proxy militia. The closest views reveal that house after house has been reduced to a crumbling wreck–roofs burned away, contents apparently looted. Pop-up boxes contain testimony from survivors, statistics on the displaced populations, and dramatic, often grisly photographs taken in the field or at refugee camps [Google Earth link].
This evidence of genocide is attached to the same digital earth where most U.S. residents can quickly zoom and pan to North America and look down upon their own houses or their children’s schools. With the barrier of distance dissolved, it’s hard not to feel a greater sense of connectedness to tragedies abroad. Which is exactly what the Holocaust museum intends: “We hope this important initiative with Google will make it that much harder for the world to ignore those who need us the most,” museum director Sara Bloomfield said. (The Sudanese themselves cannot download Google Earth, owing to U.S. restrictions on software exports.)
Just as anyone can create a new layer for Google Earth, anyone with basic 3-D modeling skills can add buildings, bridges, and other objects to it. Google Earth uses the open Collada 3-D modeling format, which was originally created by Sony as a way to speed the development of video-game worlds for the PlayStation Portable and the PlayStation 3. Using a Google program called SketchUp, amateur architects have built thousands of Collada models and uploaded them to the Google 3D Warehouse, a free library of signature buildings and other 3-D models. Larger organizations around the world now have terabytes of Collada-formatted virtual objects in storage and can easily transform them into data layers for Google Earth. That’s what the city government of Berlin did in March, when it published a KML layer containing a meticulous 3-D model of the city, prepared as part of a new digital infrastructure for city management and economic development [Google Earth link]. The model is so finely detailed that a deft user of the Google Earth navigation controls can steer the camera through the front door of the newly renovated Reichstag and into the chambers of the German parliament.
But a true mirror world shouldn’t be static, as the Berlin model and the Darfur layer are; it should reflect all the hubbub of the real world, in real time. As it turns out, KML also supports direct, real-time exchanges over the Internet using the hypertext transfer protocol (HTTP), the basic communications protocol of the Web. One hypnotic example is the 3-D flight tracker developed by Fboweb.com, a company that offers online flight-planning tools for general-aviation pilots and enthusiasts. Download the KML layer for one of the eight major U.S. airports that Fboweb covers so far, and tiny airplane icons representing all the commercial aircraft heading toward that airport at that moment will be displayed at the appropriate altitude in Google Earth [Google Earth link]. As time passes, each flight leaves a purple trail recording every ascent, turn, and descent, all the way down to the runway. It’s a plane-spotter’s dream.
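The mechanism behind live layers like the flight tracker is simple: a KML file can contain a NetworkLink element that tells Google Earth to re-fetch placemark data over HTTP on a timer, so the layer updates itself as new data arrives. Here is a minimal sketch in Python of what such a file looks like; the feed URL and refresh interval are illustrative, not Fboweb’s actual service.

```python
# Minimal sketch of a self-updating KML layer. A <NetworkLink> makes
# Google Earth poll an HTTP endpoint and reload the layer's contents
# on a fixed interval. The URL below is a placeholder, not a real feed.

def live_layer(name, url, refresh_seconds):
    """Return a KML document whose contents refresh periodically."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <name>{name}</name>
    <Link>
      <href>{url}</href>
      <refreshMode>onInterval</refreshMode>
      <refreshInterval>{refresh_seconds}</refreshInterval>
    </Link>
  </NetworkLink>
</kml>"""

kml = live_layer("Inbound flights", "http://example.com/flights.kml", 30)
```

Everything dynamic happens on the server: each time Google Earth polls the URL, it simply renders whatever placemarks come back.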
Microsoft, as one might expect, isn’t far behind Google in its effort to bridge map worlds and the real world. Scientists at Microsoft Research are perfecting a system called SensorMap that collects live data from any location and publishes it in Windows Live Local (the latest name for Microsoft’s online 2-D maps) or Microsoft Virtual Earth. Researchers at Harvard University and BBN Technologies in Cambridge, MA, won a grant from Microsoft this spring to create a SensorMap interface for “CitySense,” a network of 100 Wi‑Fi-connected weather and pollution sensors they’re installing in Cambridge. Other scientists, however, are already using Google Earth to monitor live sensor networks. At the Center for Embedded Networked Sensing at the University of California, Los Angeles, researchers have connected a network of wireless climate sensors and webcams in the James Reserve, a wilderness area in California’s San Jacinto Mountains, to a public KML layer in Google Earth. Click on an icon in Google Earth representing one of the reserve’s nest boxes, and you get a readout of the temperature and humidity inside the nest, as well as a live webcam picture showing whether any birds are at home [Google Earth link].
“Google Earth itself is really neat,” comments Jamais Cascio, the Metaverse Roadmap coauthor. “But Google Earth coupled with millions of sensors around the world, offering you real-time visuals, real-time atmospheric data, and so on–that’s transformative.”
Indeed, it’s important to remember that alongside the construction of the Metaverse, a complementary and equally ambitious infrastructure project is under way. It’s the wiring of the entire world, without the wires: tiny radio-connected sensor chips are being attached to everything worth monitoring, including bridges, ventilation systems, light fixtures, mousetraps, shipping pallets, battlefield equipment, even the human body. To be of any use, the vast amounts of data these sensors generate must be organized and displayed in forms that diagnosticians or decision makers can understand; “reality mining” is the term researchers from Accenture Technology Labs, the MIT Media Lab, and other organizations are using for this emerging specialty. And what better place to mine reality than in virtual space, where getting underneath, around, and inside data-rich representations of real-world objects is effortless?
In the field, technicians or soldiers may get 2-D slices of the most critical information through wireless handheld devices or heads-up displays; in operations centers, managers or military commanders will dive into full 3-D sensoriums to visualize their domains. “Augmented reality and sensor nets will blend right into virtual worlds,” predicts Linden Lab’s Ondrejka. “That’s when the line between the real world and its virtual representations will start blurring.”
I asked David Gelernter why we’d need the Metaverse or even mirror worlds, with all the added complications of navigating in three dimensions, when the time-tested format of the flat page has brought us so far on the Web. “That’s exactly like asking why we need Web browsers when we already have Gopher, or why we need Fortran when assembly language works perfectly well,” he replied.
The current Web might be capable of presenting all the real-time spatial data expected to flow into the Metaverse, Gelernter elaborates, but it wouldn’t be pretty. And it would keep us locked into a painfully mixed and inaccurate metaphor for our information environment–with “pages” that we “mark up” and collect into “sites” that we “go to” by means of a “locator” (the L in URL)–when a much more natural one is available. “The perception of the Web as geography is meaningless–it’s a random graph,” Gelernter says. “But I know my physical surroundings. I have a general feel for the world. This is what humans are built for, and this is the way they will want to deal with their computers.”
Judging by the growing market for location-aware technologies like GPS cell phones, the popularity of map-based storytelling and neogeography mashups like Platial, and the blistering pace of Google Earth downloads, Gelernter may be right. Google Earth is now so well known that it has been satirized on The Simpsons and is becoming a forum for classified ads and résumés. Second Life, meanwhile, is gaining roughly 25,000 members a day, sometimes stretching Linden Lab’s ability to keep its simulations running smoothly.
But for a true Metaverse to emerge, programmers must begin to weave together the technologies of social virtual worlds and mirror worlds. That would be a simpler task if Google and Linden Lab would release the source code behind their respective platforms, or at least provide application programming interfaces (APIs) so that outside developers could tap into their deeper functions. In late 2006, Google released an interface that allowed outside programmers to control some aspects of Google Earth’s behavior, but it wasn’t a full API, and there’s been no sign of one since. This January, Linden Lab released the source code for the Second Life viewer (the program that residents use on their PCs to connect to Second Life). Ondrejka says the code for the core Second Life simulation software will follow. First, he says, the company needs to get that software working better–and figure out how to make money in a world where it may no longer control the expansion of the Second Life ecosystem.
The real progress toward a fusion of Second Life and Google Earth is going on outside their home companies. Last year, Andrew “Roo” Reynolds, a “Metaverse evangelist” at IBM’s Hursley laboratory in England, hacked together an extension for SketchUp that turns Collada 3-D models into prim-based objects in Second Life. And while it may be impractical to make Second Life into a walkable Google Earth, Daden, a company in Birmingham, England, is bringing Google Earth into Second Life. The result isn’t exactly a globe, however. It’s a virtual virtual-reality chamber where the Google Earth continents are displayed as if pasted to the inside of a giant sphere, with the user’s avatar at the center. Clickable “hot spots” bring up real-time earthquake data and news feeds from CNN, the BBC, and the Indian Times [video] [SLurl; read the sign and click the button marked “TP”].
No one knows yet how to bring Second Life-like avatars directly into Google Earth, but researchers at Intel have demonstrated one possible approach. In late 2006, they created a primitive video game, called Mars Sucks, that challenges Google Earth users to search out and destroy Martian invaders using clues to the locations of their spaceships. The core of the game is a KML layer with special scripts that communicate with both the usual Google Earth content servers and a separate game server that controls elements such as the clues, cockpit graphics, and explosions. Using the same technique, it might be possible to superimpose avatars on the Google Earth environment without having to change anything about the program itself.
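The server side of that technique is equally straightforward: the game server just regenerates the KML on every poll, moving its placemarks between requests so the layer appears to animate. A sketch of such an endpoint, with invented names and coordinates:

```python
# Sketch of a server that returns freshly generated KML each time
# Google Earth polls it, in the spirit of the Mars Sucks demo.
# Names, counts, and coordinates are all illustrative.
import random

def placemark(name, lon, lat):
    """One point feature at the given longitude/latitude."""
    return (f"<Placemark><name>{name}</name>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            f"</Placemark>")

def game_state_kml():
    """Return the current 'game state' as a KML document. Jittering
    positions between polls is what makes the layer appear to move."""
    marks = "".join(
        placemark(f"Invader {i}",
                  -122.0 + random.uniform(-0.5, 0.5),
                  37.0 + random.uniform(-0.5, 0.5))
        for i in range(3))
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f'<Document>{marks}</Document></kml>')
```

Serving `game_state_kml()` from any HTTP endpoint that a NetworkLink polls is enough to put moving objects–and, in principle, avatars–into the Google Earth scene.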
Avatars of a sort can already travel through Google Earth thanks to Unype, a mashup using the free voice-over-IP program Skype. Developed by New York software consultant Murat Aktihanoglu, Unype helps geography hounds logged in to Skype synchronize their copies of Google Earth so that they’re viewing the same locations and layers. Unype can insert crude, nonanimated avatars, which the users can build themselves in the Collada format.
“I don’t think it’s the ultimate realization of the Metaverse vision,” says Google’s Hanke. “But it’s interesting to see people trying to bring these threads together.”
From these threads, indeed, an entire tapestry of 3-D services is faintly taking shape. The mature Metaverse won’t have a single killer app, say Gelernter and other observers, any more than the Web does.
Certainly, it will enable new kinds of data analysis and remote collaboration, with potentially life-saving results. “As soon as you look at the NOAA weather map in Second Life, you say, ‘Okay, what if we did the same thing using flu pandemic data?’” says Ondrejka. “You could get together the CDC and the country’s 50 leading epidemiologists, and they could have their huge supercomputer-driven infection model running. They’d get insights they couldn’t get just by reading reports.” It’s not an outlandish scenario: epidemiology has already come to Google Earth, courtesy of systems-biology graduate student Andrew Hill and colleagues at the University of Colorado, who published a KML file in April with a grim animated time line showing how the most virulent strains of avian flu jumped from species to species and country to country between 1996 and 2006 [Google Earth link].
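Animated time lines like the avian-flu layer rely on another KML feature: a TimeSpan element gives each placemark begin and end dates, and Google Earth’s time slider shows it only within that window. A sketch, with outbreak names and coordinates invented for illustration rather than taken from the Colorado team’s file:

```python
# Sketch of a time-animated KML layer. Each placemark carries a
# <TimeSpan>, so dragging Google Earth's time slider reveals events
# in sequence. All data below is made up for illustration.

def dated_placemark(name, lon, lat, begin, end):
    """One event on the time line, visible only between its dates."""
    return (f"<Placemark><name>{name}</name>"
            f"<TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            f"</Placemark>")

events = [
    dated_placemark("Outbreak A", 114.1, 22.3, "1997-05", "1997-12"),
    dated_placemark("Outbreak B", 100.5, 13.8, "2004-01", "2004-06"),
]
timeline_kml = ('<?xml version="1.0" encoding="UTF-8"?>'
                '<kml xmlns="http://www.opengis.net/kml/2.2">'
                '<Document>' + "".join(events) + '</Document></kml>')
```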
Virtual tourism is another application whose audience seems certain to expand. Already, the National Geographic and Discovery networks offer Google Earth layers pegging multimedia files to exotic locations such as the Gombe forest in Tanzania, where researchers at the Jane Goodall Institute continue to study colonies of chimpanzees [Google Earth link]. More is possible. “What I want to do one day is represent the Grand Canyon or a national park with such fidelity that you could essentially go there and plan your whole trip,” says Michael Wilson, CEO of Makena Technologies, the company that operates the virtual world There. “Or what if you could model a Europe where the sea level is 10 feet higher than it is today, or walk around the Alaskan north and see the glaciers and the Bering Strait the way they were 10 years ago? Then perceptions around global warming might change.”
Such possibilities are uplifting, to be sure, but the hardnosed truth is that we don’t need a Stephensonian Metaverse to make them happen. Remote collaboration, virtual tourism, shopping, education, training, and the like are already common on the Web, a vast resource that grows faster than we can figure out how to use it. Digital globes are gaining in fidelity, as cities are filled out with 3-D models and old satellite imagery is gradually replaced by newer high-resolution shots. And today’s island virtual worlds will only get better, with more-realistic avatars and settings and stronger connections to outside reality. A fully articulated Metaverse, whether it’s more like Snow Crash or Second Life, might almost seem like overkill.
But many people feel a pull toward the Metaverse dream that defies practical logic. To illustrate, Will Harvey, the creator of There, tells a story about water.
Liquid, running, rippling water was one of the features he and his team badly wanted to include in There. “Every employee of the company understood that water was an essential component that made a landscape feel like a real place,” Harvey says. And when archrival Second Life launched a few months before There in 2003, it was soaking in animated H2O, from waterfalls to fountains to the vast ocean surrounding its continents. “It became a standing joke that we desperately needed water,” Harvey continues. “But the business side of the company understood, correctly, that water wasn’t necessary to solve the problem of creating a place for people to socialize.”
The argument wore on for months. In the end, There got water, but it was motionless and impenetrable–“like blue cement,” Harvey says, scowling.
The point, says Harvey, is that “if you trim the technology down to the features you really need in order to solve a problem, you end up with something that’s a lot less than the Metaverse. But deep inside me and inside all of the people running There or Second Life is a desire to build this incredibly fascinating, incredibly rich version of the Metaverse, the one that has been the vision of science fiction authors for 30 years and of computer engineers for 20.”
I have come to understand this desire. In the course of my research for this story, I bought land in Second Life, built a house, filled it with furniture, bought and razed the adjoining land, lifted my house a hundred meters into the sky to get it out of the way, and began work on a bigger house [SLurl]. I was also befriended by dozens of Second Life residents, several of whom I now know better than my real neighbors. Most were delighted to hear about my story, to tell me how they’re spending their second lives, and to show me their own creations, including a hot-dog-shaped airplane and an animated Tibetan prayer wheel.
This, then, is how the Metaverse will take shape: through the imaginations of the programmers, merchants, artists, activists, and networkers who are already moving there. If these part-time émigrés from reality want embellishments like running water or six sunsets a day, they’ll code their universes that way. The rest of us may smile at their whimsy–but we will take up, and come to depend upon, the serious tools that underlie their play. And if the world we create together is less lonely and less unpredictable than the one we have now, we’ll have made a good start.
Wade Roush is a Technology Review contributing editor.