
Unsupervised Keyphrase Generation by Utilizing Masked Words Prediction and Pseudo-label BART Finetuning

  • Conference paper
From Born-Physical to Born-Virtual: Augmenting Intelligence in Digital Libraries (ICADL 2022)

Abstract

A keyphrase is a short phrase of one or a few words that summarizes a key idea discussed in a document. Keyphrase generation is the task of predicting both present and absent keyphrases for a given document. Recent models based on the sequence-to-sequence (Seq2Seq) deep learning framework have been widely applied to keyphrase generation. However, the strong performance of these models comes at the cost of a large quantity of annotated documents. In this paper, we propose an unsupervised method called MLMPBKG, based on masked language model (MLM) prediction and pseudo-label BART finetuning. We mask noun phrases in the article and apply an MLM to predict replaceable words, observing that absent keyphrases can be found among these predictions. Based on this observation, we first propose MLMKPG, which utilizes an MLM to generate keyphrase candidates and uses a sentence embedding model to rank the candidate phrases. Furthermore, we use the top-ranked phrases as pseudo-labels to finetune BART, obtaining further absent keyphrases. Experimental results show that our method achieves remarkable results on both present and absent keyphrase prediction, even surpassing supervised baselines in certain cases.
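The ranking step described in the abstract, scoring candidate phrases against the whole document with a sentence embedding model, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `embed` here is a toy hashed bag-of-words stand-in for a real sentence-embedding model, and `rank_candidates` assumes the candidates have already been produced by the MLM step.

```python
import math

def embed(text, dim=64):
    # Toy stand-in for a sentence-embedding model: hash each
    # lowercased token into a fixed-size bag-of-words vector.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors; 0 if either is zero.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(document, candidates, top_k=5):
    # Score each candidate phrase by its embedding's cosine
    # similarity to the whole-document embedding, highest first.
    doc_vec = embed(document)
    scored = [(cosine(embed(c), doc_vec), c) for c in candidates]
    scored.sort(key=lambda sc: -sc[0])
    return [c for _, c in scored[:top_k]]
```

In the paper's pipeline, the top-ranked phrases from a step like this would then serve as pseudo-labels for BART finetuning.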



Author information

Correspondence to Mizuho Iwaihara.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ju, Y., Iwaihara, M. (2022). Unsupervised Keyphrase Generation by Utilizing Masked Words Prediction and Pseudo-label BART Finetuning. In: Tseng, YH., Katsurai, M., Nguyen, H.N. (eds) From Born-Physical to Born-Virtual: Augmenting Intelligence in Digital Libraries. ICADL 2022. Lecture Notes in Computer Science, vol 13636. Springer, Cham. https://doi.org/10.1007/978-3-031-21756-2_2


  • DOI: https://doi.org/10.1007/978-3-031-21756-2_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21755-5

  • Online ISBN: 978-3-031-21756-2

  • eBook Packages: Computer Science (R0)
