Effective Time Ratio: A Measure for Web Search Engines with Document Snippets

  • Conference paper
Information Retrieval Technology (AIRS 2010)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 6458)

Abstract

The dominant method for evaluating search engines is the Cranfield paradigm, but existing metrics do not account for features of modern search engines, such as document snippets. In this paper, we propose a new metric, effective time ratio, for search engine evaluation. Effective time ratio measures the ratio of the effective time a user spends obtaining relevant information to the total search time. For a retrieval system that does not present document snippets, its value is identical to precision. For a search engine with snippets, theoretical analysis proves that its value reflects both retrieval performance and snippet quality. We further conduct a real user study, showing that effective time ratio reflects users’ satisfaction better than existing metrics based on document relevance and/or snippet relevance.
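
The abstract does not spell out the formula, but the definition pins it down: effective time ratio (ETR) is effective time divided by total search time, and it collapses to precision when no snippets are shown and every retrieved document costs the user the same reading time. Below is a minimal sketch of that computation, assuming per-item time logs and binary relevance judgments; the `Interaction` type and its field names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    seconds: float   # time the user spent reading this item
    relevant: bool   # whether the item yielded relevant information

def effective_time_ratio(interactions: list[Interaction]) -> float:
    """Effective time (time spent on relevant items) over total search time."""
    total = sum(i.seconds for i in interactions)
    effective = sum(i.seconds for i in interactions if i.relevant)
    return effective / total if total > 0 else 0.0

# Without snippets, a user who spends a constant time t on each of the
# k retrieved documents gets ETR = (t * #relevant) / (t * k), i.e.
# precision at k -- the special case the abstract mentions.
session = [Interaction(30.0, True), Interaction(30.0, False),
           Interaction(30.0, True), Interaction(30.0, False)]
print(effective_time_ratio(session))  # 0.5, matching precision at rank 4
```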

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

He, J., Shu, B., Li, X., Yan, H. (2010). Effective Time Ratio: A Measure for Web Search Engines with Document Snippets. In: Cheng, PJ., Kan, MY., Lam, W., Nakov, P. (eds) Information Retrieval Technology. AIRS 2010. Lecture Notes in Computer Science, vol 6458. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17187-1_7

  • DOI: https://doi.org/10.1007/978-3-642-17187-1_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17186-4

  • Online ISBN: 978-3-642-17187-1

  • eBook Packages: Computer Science (R0)
