How did such advanced mapping tools wind up in the hands of average Web users?

The short answer starts with the U.S. Department of Defense’s 1978 launch of the first satellites in the Global Positioning System. A GPS receiver determines how long the time signals broadcast by several GPS satellites have taken to reach it and, with a bit of spherical geometry, can then calculate its position to within a matter of meters. The original use of the system was to allow U.S. missile submarines to determine their positions within a few minutes of surfacing – information required by the guidance systems in the subs’ ICBMs if they were to make direct hits on enemy missile silos.
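The geometry of that position fix can be sketched in miniature. The snippet below is an illustrative toy, not receiver firmware: it trilaterates a 2-D position from distances to three beacons at known locations. A real GPS receiver does the same in three dimensions, converting signal travel times to distances and using a fourth satellite to solve for its own clock error.

```python
import math

def trilaterate(beacons, distances):
    """Recover (x, y) from three known beacon positions and the
    measured distances to each.

    Each measurement defines a circle (x - xi)^2 + (y - yi)^2 = di^2.
    Subtracting the first circle's equation from the other two cancels
    the squared unknowns, leaving a 2x2 linear system solved here by
    Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Linearized system: A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A receiver at (3, 4) measures its distance to three beacons:
beacons = [(0, 0), (10, 0), (0, 10)]
truth = (3, 4)
dists = [math.dist(truth, b) for b in beacons]
print(trilaterate(beacons, dists))  # ≈ (3.0, 4.0)
```

Degrading the system, as Selective Availability did, amounts to injecting error into the measured distances; the same arithmetic then yields a proportionally scattered fix.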

Beginning during the Reagan administration, civilians could also use GPS, but only in a degraded form, accurate to about 100 meters in any direction. On May 1, 2000, the intentional degrading of GPS signals, called Selective Availability, was turned off by order of President Clinton, instantly reducing the range of error in a civilian GPS fix to 10 meters or so. This sudden and enormous increase in the accuracy of GPS location-finding set the stage for all the online mapping innovation that has followed.

It spurred a broad group, including hikers, hackers, and urban planners, to take a deeper interest in Web-based maps, which were a natural way to publish the new geographic data they could collect and share using their GPS units. After all, a consumer-grade GPS receiver could now distinguish between one side of a street and the other, determine which storefront a user was walking past, or guide someone to a hidden “geocache” using only its published latitude and longitude (see “Roamin’ Holiday,” September 2005).

Unfortunately, when it came to making online maps, there weren’t a lot of options to choose from. Since its launch in 1996, one website – MapQuest – had dominated this niche. And while many Web developers wrote programs that copied MapQuest maps for redisplay in other contexts, they couldn’t program more sophisticated tricks, such as overlaying their own data on MapQuest maps.

“The first-generation Web services in the mapping space – ESRI, MapQuest, MapPoint – have had APIs for quite some time, but they weren’t hacker-friendly,” says Tim O’Reilly, CEO of O’Reilly Media and creator of the Where 2.0 conference. Eventually, MapQuest prohibited even the repurposing of its maps. This created a demand for reusable map data, a demand that would eventually be met by companies such as Google, Yahoo, and Microsoft.

Along the way, however, a few other things had to happen. First, computers needed enough processing speed and storage capacity to handle the multigigabyte data sets and complex mathematical transformations that displaying and manipulating digital maps require. As Locative Technologies’ Erle notes, Moore’s Law took care of that.

Second, the sharing-oriented mindset of the open-source-software community, along with an awareness of the possibilities of the Web, had to penetrate the walls of traditional GIS companies like ESRI. ESRI had long focused its products on industries such as financial services, urban and regional planning, and defense. Its emphasis, understandably, was on building accurate maps to convey critical data, not on tinkering with code or putting fun, interactive maps on the Web (see “Do Maps Have Morals?” June 2005).

But over the last several years, conversation within industry standards groups like the Open Geospatial Consortium (of which ESRI is a leading member) and the World Wide Web Consortium at MIT has led to agreement on basic standards for mapping-software APIs – and on additions to XML, the Web’s standard data-exchange format, that make it easy to tie Web documents to geographical locations. Embedding the XML tags <geo:lat>38.888</geo:lat> and <geo:long>-77.035</geo:long> in a Web document, for example, lets mapping or browsing software know that the document is about the Washington Monument.
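Those tags come from the W3C Basic Geo (WGS84) vocabulary. As a minimal sketch, here is how software might read them back out of a document using Python's standard XML parser; the document text is a made-up example in the spirit of the one above:

```python
import xml.etree.ElementTree as ET

# Namespace of the W3C Basic Geo vocabulary that defines geo:lat / geo:long.
GEO_NS = "http://www.w3.org/2003/01/geo/wgs84_pos#"

doc = """<?xml version="1.0"?>
<item xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
  <title>Washington Monument</title>
  <geo:lat>38.888</geo:lat>
  <geo:long>-77.035</geo:long>
</item>"""

root = ET.fromstring(doc)
# ElementTree addresses namespaced tags as {namespace}localname.
lat = float(root.findtext(f"{{{GEO_NS}}}lat"))
lon = float(root.findtext(f"{{{GEO_NS}}}long"))
print(lat, lon)  # 38.888 -77.035
```

Once coordinates can be pulled out of ordinary documents this way, any mapping application can plot them, which is exactly the interoperability the standards work was after.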

Third, owners of large, valuable, proprietary databases on the Web needed some time to arrive at the idea that granting outside access to their databases might actually be good for business. Amazon was one of the first companies to put this idea into practice, releasing an API in 2003 that allows programmers to tap into its product database, pull out whatever information they want, and present it on their own websites in any format they choose, as long as any resulting purchases are directed back to Amazon (see “Amazon: Giving Away the Store,” January 2005).

The basic idea of Web services – that the software and databases powering e-retailing, online photo-sharing, and the like should be built according to standards allowing other parties to tap into them – was still radical even three years ago. Today, however, it’s the guiding principle of an increasing number of open-source developers and megacorporations – even Microsoft.

By early 2005, then, the hardware, the standards, and the collaboration models were in place for a burst of innovation in Web mapping applications. All that was needed was a starting gun. The gun fired on February 8 – the day Google Maps went online.
