
Searching the Web could become faster for users and much more efficient for search companies if search engines were split up and distributed around the world, according to researchers at Yahoo.

Currently, search engines are based on a centralized model, explains Ricardo Baeza-Yates, a researcher at Yahoo’s Labs in Barcelona, Spain. This means that a search engine’s index–the core database that lists the location and relative importance of information stored across the Web–as well as additional data, such as cached copies of content, are replicated within several data centers at different locations. The tendency among search companies, says Baeza-Yates, has been to operate a relatively small number of very large data centers across the globe.

Baeza-Yates and his colleagues devised another way: a “distributed” approach, with both the search index and the additional data spread out over a larger number of smaller data centers. With this approach, smaller data centers would contain locally relevant information and a small proportion of globally replicated data. Many search queries common to a particular area could be answered using the content stored in a local data center, while other queries would be passed on to different data centers.
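The routing idea described above can be sketched in a few lines. This is a hypothetical illustration, not Yahoo's actual system: each center holds an index of locally relevant pages (plus a small replicated global slice, folded into the same index here), serves a query itself when its best local result is strong enough, and otherwise forwards the query to whichever peer has a better answer.

```python
class DataCenter:
    """Toy model of one regional search data center."""

    def __init__(self, name, index, peers=None):
        self.name = name
        self.index = index        # term -> list of (url, relevance score)
        self.peers = peers or []  # other data centers it can forward to

    def local_results(self, term):
        """Results this center can serve from its own index, best first."""
        return sorted(self.index.get(term, []), key=lambda hit: -hit[1])

    def search(self, term, min_score=0.5):
        """Answer locally if the top local hit is good enough; else forward."""
        results = self.local_results(term)
        if results and results[0][1] >= min_score:
            return self.name, results
        # Forward: pick whichever center has the strongest top result.
        best = (self.name, results)
        for peer in self.peers:
            peer_results = peer.local_results(term)
            if peer_results and (not best[1] or peer_results[0][1] > best[1][0][1]):
                best = (peer.name, peer_results)
        return best
```

In this sketch a query for a locally popular term never leaves its region, while a query the local index covers poorly is served by the remote center with the stronger result; the threshold `min_score` is an invented stand-in for whatever quality test a real system would apply.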

“Many people have talked about this in the past,” says Baeza-Yates. But there was resistance, he says, because many assumed that such an approach would be too slow or expensive. It was also unclear how to ensure that each query got the best global result and not just the best that the local center had to offer. A few start-up companies have even launched peer-to-peer search engines that harness the power of users’ own machines. But this approach hasn’t proven very scalable.

To achieve a workable distributed system, Baeza-Yates and colleagues designed it so that statistical information about page rankings could be shared between the different data centers. This would allow each data center to run an algorithm that compares its results with those of others. If another data center gave a statistically better result, the query would be forwarded to it.
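The statistics-sharing step might look something like the following. Again a hedged sketch under assumed details the article does not give: here each center periodically publishes a compact table of its best-known score per term, and the forwarding decision consults those shared tables rather than querying every peer live.

```python
def build_summary(index):
    """Compact statistic a center shares with its peers:
    the best relevance score it holds for each term."""
    return {term: max(score for _, score in hits)
            for term, hits in index.items()}

def route(term, local_name, summaries):
    """Pick the center whose shared statistics promise the best answer.

    summaries: center name -> {term: best score}, as built above.
    Stays local unless another center is statistically better.
    """
    best_center = local_name
    best_score = summaries.get(local_name, {}).get(term, 0.0)
    for center, table in summaries.items():
        if table.get(term, 0.0) > best_score:
            best_center, best_score = center, table[term]
    return best_center
```

Because the decision is made from small exchanged tables rather than full indexes, a query is forwarded only when the statistics say another center likely beats the local result, which is the property the researchers needed to avoid returning merely the best *local* answer.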

The group put the distributed approach to the test in a feasibility study using real search data. They present their findings this week at the Association for Computing Machinery's Conference on Information and Knowledge Management in Hong Kong, where the work will receive the best-paper award.

“We wanted to prove that we could achieve the same performance [as the centralized model] without it costing too much,” says Baeza-Yates. In fact, they found that their approach could reduce the overall costs of operating a search engine by as much as 15 percent without compromising the quality of the answers.

“It’s a valid approach,” says Bruce Maggs, a professor of computer science at Duke University in Durham, NC, and vice president of research at Akamai, a Web content delivery and caching company based in Cambridge, MA. Fully replicating a database at multiple sites, as search companies typically do now, is inefficient, Maggs says, since only a small proportion of data is accessed at each site. A distributed approach “also saves considerably on everything else in the same proportion, such as capital costs and real estate,” he says. This is because, overall, the number of servers required goes down.

For users, the advantage would be quicker search results. This is because most answers would come from a data center that’s geographically closer. A small number of results would take longer than normal–but only 20 to 30 percent longer, says Baeza-Yates. “On average, most queries will be faster,” he says.

Maggs says the performance improvement would need to be high enough to counteract any delay in those search queries that have to be sent further afield.

Another trade-off is that users in different locations would more often see different results for the same query than they do today, says Peter Triantafillou, a researcher at the University of Patras in Greece who studies large-scale search. This already happens to some extent under the centralized model, he says, but it could become a bigger concern if many more searches were inconsistent.

However, with search engine data centers already housing tens of thousands of servers, it’s questionable whether they can continue to grow and still function efficiently, Triantafillou says. “Will they be able to go to hundreds of thousands or millions?” he says. Just the practicality of installing the cabling and optics in and out of such facilities would pose serious problems, he says.

The distributed approach remains a long-term aim, Baeza-Yates admits. “But for the Internet,” he adds, “long-term is only about five years.”
