TREC-Style Evaluations

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7757)

Abstract

TREC-style evaluation is generally understood to mean the use of test collections, an evaluation methodology referred to as the Cranfield paradigm. This paper starts with a short description of the original Cranfield experiment, with emphasis on the how and why of the Cranfield framework. This framework is then updated to cover the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC. Here again the focus is on the how and why, and in particular on how the older evaluation methodologies have evolved to handle new information access techniques. The final section contains advice on using existing test collections and on building new ones.
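
As a concrete illustration of the test-collection methodology the abstract describes, the following minimal Python sketch scores a ranked retrieval run against a set of relevance judgments using average precision, the measure most closely associated with TREC-style batch evaluation. This is an illustrative sketch, not material from the chapter: it assumes the standard TREC qrels and run file formats, and the file names qrels.txt and run.txt are placeholders.

    # Minimal sketch of Cranfield-style batch evaluation: given a test
    # collection's relevance judgments (qrels) and a system's ranked run,
    # score each topic with average precision and report the mean (MAP).
    # File formats follow standard TREC conventions; the file names are
    # placeholders, not part of the original chapter.

    from collections import defaultdict


    def load_qrels(path):
        """Read TREC qrels lines: topic, iteration, docno, relevance."""
        relevant = defaultdict(set)
        with open(path) as f:
            for line in f:
                topic, _, docno, rel = line.split()
                if int(rel) > 0:  # treat any positive grade as relevant
                    relevant[topic].add(docno)
        return relevant


    def load_run(path):
        """Read TREC run lines: topic, iteration, docno, rank, score, tag."""
        ranked = defaultdict(list)
        with open(path) as f:
            for line in f:
                topic, _, docno, _, _, _ = line.split()
                ranked[topic].append(docno)  # assumes lines are rank-ordered
        return ranked


    def average_precision(ranking, relevant):
        """Mean of the precision values at the ranks of relevant documents."""
        if not relevant:
            return 0.0
        hits, precision_sum = 0, 0.0
        for rank, docno in enumerate(ranking, start=1):
            if docno in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant)


    if __name__ == "__main__":
        qrels = load_qrels("qrels.txt")  # placeholder file names
        run = load_run("run.txt")
        ap = {t: average_precision(run.get(t, []), qrels[t]) for t in qrels}
        print(f"MAP over {len(ap)} topics: {sum(ap.values()) / len(ap):.4f}")

Averaging per-topic scores over a fixed topic set is what makes such "batch" runs comparable across systems and across years of an evaluation campaign.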


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Harman, D. (2013). TREC-Style Evaluations. In: Agosti, M., Ferro, N., Forner, P., Müller, H., Santucci, G. (eds) Information Retrieval Meets Information Visualization. PROMISE 2012. Lecture Notes in Computer Science, vol 7757. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36415-0_7

  • DOI: https://doi.org/10.1007/978-3-642-36415-0_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-36414-3

  • Online ISBN: 978-3-642-36415-0

  • eBook Packages: Computer Science, Computer Science (R0)
