Searching the Web could become faster for users and much more efficient for search companies if search engines were split up and distributed around the world, according to researchers at Yahoo.
Currently, search engines are based on a centralized model, explains Ricardo Baeza-Yates, a researcher at Yahoo Labs in Barcelona, Spain. This means that a search engine's index (the core database that lists the location and relative importance of information stored across the Web), along with additional data such as cached copies of content, is replicated across several data centers at different locations. The tendency among search companies, says Baeza-Yates, has been to operate a relatively small number of very large data centers around the globe.
Baeza-Yates and his colleagues devised another way: a "distributed" approach, with both the search index and the additional data spread out over a larger number of smaller data centers. With this approach, smaller data centers would contain locally relevant information and a small proportion of globally replicated data. Many search queries common to a particular area could be answered using the content stored in a local data center, while other queries would be passed on to different data centers.
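A rough sketch of how such routing might work is below. The class, field names, and lookup scheme are illustrative assumptions, not Yahoo's actual design: each data center keeps a locally relevant index plus a small globally replicated slice, answers what it can on its own, and forwards the rest to peer data centers.

```python
# Illustrative sketch of distributed query routing (assumed structures, not Yahoo's implementation).
from dataclasses import dataclass, field


@dataclass
class DataCenter:
    name: str
    local_index: dict[str, list[str]]      # locally relevant queries -> results
    global_replica: dict[str, list[str]]   # small globally replicated slice
    peers: list["DataCenter"] = field(default_factory=list)

    def search(self, query: str, forwarded: bool = False) -> list[str]:
        # Answer from local or globally replicated content when possible.
        if query in self.local_index:
            return self.local_index[query]
        if query in self.global_replica:
            return self.global_replica[query]
        # Otherwise pass the query on to other data centers (one hop here).
        if not forwarded:
            for peer in self.peers:
                results = peer.search(query, forwarded=True)
                if results:
                    return results
        return []


# Usage: a query common to the local area is answered locally;
# an unfamiliar one is forwarded to a peer data center.
barcelona = DataCenter("barcelona", {"la rambla": ["rambla.example"]}, {"yahoo": ["yahoo.com"]})
tokyo = DataCenter("tokyo", {"shibuya": ["shibuya.example"]}, {"yahoo": ["yahoo.com"]})
barcelona.peers.append(tokyo)
tokyo.peers.append(barcelona)

print(barcelona.search("la rambla"))   # served from the local index
print(barcelona.search("shibuya"))     # forwarded to the Tokyo data center
```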
"Many people have talked about this in the past," says Baeza-Yates. But there was resistance, he says, because many assumed that such an approach would be too slow or expensive. It was also unclear how to ensure that each query got the best global result and not just the best that the local center had to offer. A few start-up companies have even launched peer-to-peer search engines that harness the power of users' own machines. But this approach hasn't proven very scalable.
The distributed approach remains a long-term aim, Baeza-Yates admits. "But for the Internet," he adds, "long-term is only about five years."
From Technology Review