
What to Read Next? Challenges and Preliminary Results in Selecting Representative Documents

  • Conference paper
Database and Expert Systems Applications (DEXA 2018)

Abstract

The vast amount of scientific literature poses a challenge when one is trying to understand a previously unknown topic. Selecting a representative subset of documents that covers most of the desired content can address this challenge by presenting the user with only a small number of documents. We build on existing research on representative subset extraction and apply it in an information retrieval setting. Our document selection process consists of three steps: computation of the document representations, clustering, and selection of documents. We implement and compare two different document representations, two different clustering algorithms, and three different selection methods using a coverage and a redundancy metric. We execute our 36 experiments on two datasets from different domains, with 10 sample queries each. The results show that there is no clear favorite and raise the question of whether coverage and redundancy are sufficient for evaluating representative subsets.
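As a rough illustration of the three-step pipeline named in the abstract (representation, clustering, selection), the following Python sketch shows one possible instantiation using TF-IDF vectors, k-means, and a nearest-to-centroid selection. These concrete choices are assumptions for illustration only; the paper compares two representations, two clustering algorithms, and three selection methods whose details are not given on this page.

```python
# Minimal sketch of the representation -> clustering -> selection pipeline.
# The concrete choices (TF-IDF, k-means, nearest-to-centroid) are illustrative
# assumptions, not necessarily the configurations evaluated in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min


def select_representatives(documents, n_clusters=5, random_state=0):
    """Return indices of one representative document per cluster."""
    # Step 1: document representation (here: TF-IDF vectors).
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

    # Step 2: cluster the document vectors (here: k-means).
    km = KMeans(n_clusters=n_clusters, random_state=random_state, n_init=10)
    km.fit(vectors)

    # Step 3: selection -- the document closest to each cluster centroid.
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, vectors)
    return sorted(set(closest))


# Example: pick representatives from a small result list.
docs = ["neural networks for text", "deep learning text models",
        "protein folding simulation", "molecular dynamics of proteins",
        "search result diversification"]
print(select_representatives(docs, n_clusters=3))
```

Coverage and redundancy of the selected subset could then be scored on top of such a pipeline, although the specific metrics used in the paper's evaluation are not detailed in this excerpt.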


Notes

  1. http://www.acm.org/about/class/class/2012.

  2. https://www.ncbi.nlm.nih.gov/pmc/.

  3. https://www.nlm.nih.gov/mesh/.

  4. https://www.elastic.co/.


Acknowledgment

This research was co-financed by the EU H2020 project MOVING (http://www.moving-project.eu/) under contract no. 693092 and by the EU project DigitalChampions_SH.

Author information


Corresponding author

Correspondence to Falk Böschen.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Beck, T., Böschen, F., Scherp, A. (2018). What to Read Next? Challenges and Preliminary Results in Selecting Representative Documents. In: Elloumi, M., et al. Database and Expert Systems Applications. DEXA 2018. Communications in Computer and Information Science, vol 903. Springer, Cham. https://doi.org/10.1007/978-3-319-99133-7_19

  • DOI: https://doi.org/10.1007/978-3-319-99133-7_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99132-0

  • Online ISBN: 978-3-319-99133-7

  • eBook Packages: Computer Science, Computer Science (R0)
