
Searching the Web could become faster for users and much more efficient for search companies if search engines were split up and distributed around the world, according to researchers at Yahoo.

Currently, search engines are based on a centralized model, explains Ricardo Baeza-Yates, a researcher at Yahoo's labs in Barcelona, Spain. This means that a search engine's index (the core database that lists the location and relative importance of information stored across the Web), along with additional data such as cached copies of content, is replicated within several data centers at different locations. The tendency among search companies, says Baeza-Yates, has been to operate a relatively small number of very large data centers across the globe.

Baeza-Yates and his colleagues devised another way: a “distributed” approach, with both the search index and the additional data spread out over a larger number of smaller data centers. With this approach, smaller data centers would contain locally relevant information and a small proportion of globally replicated data. Many search queries common to a particular area could be answered using the content stored in a local data center, while other queries would be passed on to different data centers.
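A minimal sketch of that routing idea, assuming a toy index keyed by query term (the `DataCenter` class, its fields, and the `answer_query` helper are illustrative names, not part of Yahoo's system): a small data center keeps a partition of the index covering locally popular content plus a thin globally replicated slice, answers what it can, and passes everything else on.

```python
from dataclasses import dataclass, field


@dataclass
class DataCenter:
    """Toy model of one regional search data center."""
    name: str
    # term -> IDs of locally relevant documents indexed here
    local_index: dict[str, list[str]] = field(default_factory=dict)
    # small slice of globally replicated documents
    global_index: dict[str, list[str]] = field(default_factory=dict)

    def lookup(self, terms: list[str]) -> list[str]:
        """Collect candidate document IDs from this center's indexes."""
        hits: list[str] = []
        for term in terms:
            hits.extend(self.local_index.get(term, []))
            hits.extend(self.global_index.get(term, []))
        return hits


def answer_query(local: DataCenter, peers: list[DataCenter], query: str) -> list[str]:
    """Answer locally when the local center has matches; otherwise forward."""
    terms = query.lower().split()
    results = local.lookup(terms)
    if results:            # query common to this area: no long-haul round-trip
        return results
    for peer in peers:     # otherwise pass the query on to other data centers
        results = peer.lookup(terms)
        if results:
            return results
    return []
```

In this toy version the forwarding decision is simply "nothing found locally"; the group's actual criterion, described below, compares ranking statistics across data centers.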

“Many people have talked about this in the past,” says Baeza-Yates. But there was resistance, he says, because many assumed that such an approach would be too slow or expensive. It was also unclear how to ensure that each query got the best global result and not just the best that the local center had to offer. A few start-up companies have even launched peer-to-peer search engines that harness the power of users’ own machines. But this approach hasn’t proven very scalable.

To make the distributed system workable, Baeza-Yates and his colleagues designed it so that statistical information about page rankings could be shared among the different data centers. This would allow each data center to run an algorithm that compares its results with those of the others; if another data center offered a statistically better result, the query would be forwarded to it.
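As a rough illustration of that comparison step (the per-term score summaries, center names, and simple "pick the maximum" rule below are assumptions made for the sketch, not the exact algorithm from the paper), each center could exchange compact statistics about the best ranking scores it can offer per term and forward a query whenever a peer's statistics predict a stronger top result:

```python
def best_score(stats: dict[str, float], terms: list[str]) -> float:
    """Best ranking score a center expects to offer, from per-term summaries."""
    return max((stats.get(t, 0.0) for t in terms), default=0.0)


def route_query(query: str, local_name: str,
                shared_stats: dict[str, dict[str, float]]) -> str:
    """Pick the center whose shared statistics predict the best top result.

    `shared_stats` maps each center's name to its per-term score summaries;
    exchanging these compact statistics lets every center make the comparison
    without shipping full indexes around.
    """
    terms = query.lower().split()
    best_center = local_name
    best = best_score(shared_stats[local_name], terms)
    for name, stats in shared_stats.items():
        score = best_score(stats, terms)
        if score > best:    # a peer predicts a statistically better result
            best_center, best = name, score
    return best_center      # forward the query if this isn't the local center


# Example: a Barcelona center keeps "local news" but forwards "quantum computing".
shared = {
    "barcelona": {"local": 0.9, "news": 0.8, "quantum": 0.1},
    "sunnyvale": {"quantum": 0.95, "computing": 0.9},
}
print(route_query("local news", "barcelona", shared))         # -> barcelona
print(route_query("quantum computing", "barcelona", shared))  # -> sunnyvale
```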

The group put the distributed approach to the test in a feasibility study using real search data. They present their findings this week at the Association for Computing Machinery's Conference on Information and Knowledge Management in Hong Kong, where the work will receive the best-paper award.

“We wanted to prove that we could achieve the same performance [as the centralized model] without it costing too much,” says Baeza-Yates. In fact, they found that their approach could reduce the overall costs of operating a search engine by as much as 15 percent without compromising the quality of the answers.


