Automatic Evaluation of World Wide Web Search Services
By Abdur Chowdhury, America Online Inc., and Ian Soboroff, National Institute of Standards and Technology, USA. 2002. The authors present a method for automatically comparing search engines based on how they rank known-item search results. Queries are constructed through query log analysis, and the known items are derived automatically from ODP editor descriptions. Five search services are compared: Lycos, Netscape, Fast, Google, and HotBot; some perform better than others, but the majority are statistically equivalent. [PDF]
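The known-item methodology above can be sketched roughly as follows: each query has a single known relevant item (here, a URL taken from an ODP editor description), and an engine is scored by the mean reciprocal rank of that item in its result lists. The data and function names below are illustrative, not taken from the paper.

```python
# Sketch of known-item evaluation: score an engine by the mean
# reciprocal rank (MRR) of the one known relevant URL per query.
# Toy data; real queries and known items would come from query log
# analysis and ODP editor descriptions.

def reciprocal_rank(results, known_item):
    """1/rank of the known item in the ranked result list, 0 if absent."""
    for rank, url in enumerate(results, start=1):
        if url == known_item:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(runs, known_items):
    """Average reciprocal rank over all queries for one engine."""
    rrs = [reciprocal_rank(runs[q], known_items[q]) for q in known_items]
    return sum(rrs) / len(rrs)

# Hypothetical ranked results from one engine for two queries.
known = {
    "odp history": "http://example.org/odp",
    "mozilla": "http://example.org/moz",
}
engine_runs = {
    "odp history": ["http://a.example", "http://example.org/odp"],  # rank 2
    "mozilla": ["http://example.org/moz"],                          # rank 1
}
print(mean_reciprocal_rank(engine_runs, known))  # (0.5 + 1.0) / 2 = 0.75
```

Engines whose MRR scores differ only slightly can then be tested for statistical equivalence, as the paper does across the five services.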
Can Social Bookmarking Improve Web Search?
By Paul Heymann, Georgia Koutrika, and Hector Garcia-Molina, Dept. of Computer Science, Stanford University, Stanford, CA, USA. The paper presents eleven experiments designed to evaluate different aspects of social bookmarking and its impact on web search, using bookmarking data, Yahoo! and AOL search data, and ODP data gathered between May and June of 2007. [PDF]
Random Sampling from a Search Engine's Corpus
By Ziv Bar-Yossef and Maxim Gurevich. Technical report, August 2006. Two novel algorithms for random sampling are used to collect comparative statistics on the corpora of Google, MSN Search, and Yahoo!. ODP is used to create a test search engine and query pool. [PDF]
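A toy sketch of pool-based rejection sampling in the spirit of such samplers: draw a random query from the pool, draw a random result for it, then accept the document with probability inversely proportional to how easily the pool reaches it. This corrects the bias toward documents that match many queries. The in-memory index and names below are illustrative, not the paper's implementation.

```python
import random

# Toy query pool over an in-memory "index" (query -> matching documents).
INDEX = {
    "apple": ["d1", "d2"],
    "banana": ["d2", "d3"],
    "cherry": ["d3"],
}
POOL = list(INDEX)

def reach_weight(doc):
    """Relative chance a (random query, random result) trial hits doc."""
    return sum(1.0 / len(docs) for docs in INDEX.values() if doc in docs)

MIN_WEIGHT = min(reach_weight(d) for docs in INDEX.values() for d in docs)

def sample_document(rng=random):
    """Draw one near-uniform document via rejection sampling over the pool."""
    while True:
        query = rng.choice(POOL)
        doc = rng.choice(INDEX[query])
        # Accept inversely to how easily this doc is reached via the pool,
        # so easily-reached documents are not over-represented.
        if rng.random() < MIN_WEIGHT / reach_weight(doc):
            return doc
```

Over many draws, each document is sampled with roughly equal frequency regardless of how many pool queries match it, which is what makes comparative corpus statistics meaningful.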
Using Titles and Category Names from Editor-driven Taxonomies for Automatic Evaluation
By Steven M. Beitzel, Eric C. Jensen, Abdur Chowdhury, and David Grossman, Illinois Institute of Technology, USA. In: Proceedings of the twelfth international conference on Information and knowledge management, 2003. Evaluation of IR systems has always been difficult because of the need for manually assessed relevance judgments; large editor-driven taxonomies on the web make a new evaluation approach possible. ODP's taxonomy is used to compare and contrast two methodologies. [PDF]
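The general idea of judging relevance automatically from an editor-driven taxonomy can be sketched as follows: a retrieved document counts as relevant to a query if the query terms appear in its taxonomy entry title (one methodology) or its category name (the other). This is purely illustrative of the approach, not the paper's exact matching rules.

```python
# Sketch of automatic relevance judging from taxonomy metadata:
# a document is judged relevant if every query term occurs in the
# chosen metadata field (entry title or category name).

def terms(text):
    """Lowercased bag of whitespace-separated terms."""
    return set(text.lower().split())

def judged_relevant(query, metadata_text):
    """True if all query terms occur in the taxonomy metadata text."""
    return terms(query) <= terms(metadata_text)

# Hypothetical ODP-style titles used as pseudo-judgments.
print(judged_relevant("open directory", "Open Directory Project"))  # True
print(judged_relevant("open directory", "Mozilla Firefox"))         # False
```

Comparing engine rankings under title-based versus category-based pseudo-judgments is then a matter of scoring the same runs against each judgment set.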
Last update: October 10, 2011 at 9:19:27 UTC