Gated Neural Network for Sentence Compression Using Linguistic Knowledge

  • Conference paper
Natural Language Processing and Information Systems (NLDB 2017)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10260)

Abstract

Previous work has recognized that linguistic features such as part-of-speech tags and dependency labels are helpful for sentence compression, which aims to simplify a text while preserving its underlying meaning. In this work, we introduce a gating mechanism and propose a gated neural network that selectively exploits linguistic knowledge for deletion-based sentence compression. Experimental results on two popular datasets show that the proposed gated neural network, equipped with selectively fused linguistic features, yields better compressions under both an automatic metric and human evaluation, compared with a previous competitive compression system. We also investigate the gating mechanism through a visualization analysis.
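The abstract leaves the exact gating formulation to the paper body; the sketch below shows one common way a learned gate can selectively fuse linguistic features (e.g. POS tags and dependency labels) with word embeddings. The function names, dimensions, and concatenation-based fusion are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(word_emb, ling_emb, W, b):
    """Compute a sigmoid gate from the concatenated embeddings and use it
    to scale the linguistic features before fusing them with the word
    embedding. A gate value near 0 effectively discards a feature."""
    gate = sigmoid(W @ np.concatenate([word_emb, ling_emb]) + b)
    return np.concatenate([word_emb, gate * ling_emb])

# Toy dimensions: a 4-d word embedding and a 3-d embedding of
# linguistic features (e.g. POS tag and dependency label); in practice
# both would be learned jointly with the compression model.
rng = np.random.default_rng(0)
word_emb = rng.standard_normal(4)
ling_emb = rng.standard_normal(3)
W = rng.standard_normal((3, 4 + 3))  # one gate unit per linguistic dim
b = np.zeros(3)
print(gated_fusion(word_emb, ling_emb, W, b).shape)  # (7,)
```

In a full deletion-based compressor, the fused vector for each token would then feed a sequence labeler, such as the LSTM tagger of Filippova et al. [8], that decides whether each word is kept or deleted.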


Notes

  1. We use Parsey McParseface, one of the state-of-the-art English parsers, released by Google: https://github.com/tensorflow/models/tree/master/syntaxnet.

  2. https://code.google.com/p/word2vec/.

  3. https://github.com/Dodo-Cho/NLDB-2017.

  4. Landis and Koch [17] characterize \(\kappa\) values < 0 as no agreement, 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement; a worked sketch follows these notes.
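For concreteness, Cohen's \(\kappa\) [18] is defined as \(\kappa = (p_o - p_e)/(1 - p_e)\), where \(p_o\) is the observed agreement between two annotators and \(p_e\) the agreement expected by chance from the annotators' marginal label distributions. The sketch below computes \(\kappa\) for two annotators' per-token keep/delete decisions and maps it onto the Landis and Koch bands; the toy labels are invented for illustration.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa [18]: (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement and p_e the chance agreement from the marginals."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    categories = set(labels_a) | set(labels_b)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

def landis_koch_band(kappa):
    """Map a kappa value onto the Landis and Koch [17] descriptive bands."""
    if kappa < 0:
        return "no agreement"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Two hypothetical annotators judging whether each word is kept or deleted.
ann_a = ["keep", "keep", "del", "keep", "del", "del"]
ann_b = ["keep", "del", "del", "keep", "del", "keep"]
k = cohens_kappa(ann_a, ann_b)
print(f"kappa = {k:.3f} ({landis_koch_band(k)})")  # kappa = 0.333 (fair)
```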

References

  1. Berg-Kirkpatrick, T., Gillick, D., Klein, D.: Jointly learning to extract and compress. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 481–490 (2011)

  2. Bingel, J., Søgaard, A.: Text simplification as tree labeling. In: The 54th Annual Meeting of the Association for Computational Linguistics, pp. 337–343 (2016)

  3. Cho, K., van Merriënboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: encoder-decoder approaches. In: Syntax, Semantics and Structure in Statistical Translation, pp. 103–111 (2014)

  4. Clarke, J., Lapata, M.: Constraint-based sentence compression: an integer programming approach. In: Proceedings of the COLING/ACL on Main Conference Poster Sessions, pp. 144–151 (2006)

  5. Clarke, J., Lapata, M.: Global inference for sentence compression: an integer linear programming approach. J. Artif. Intell. Res. 31, 399–429 (2008)

  6. Elman, J.L.: Finding structure in time. Cogn. Sci. 14(2), 179–211 (1990)

  7. Filippova, K., Altun, Y.: Overcoming the lack of parallel data in sentence compression. In: EMNLP, pp. 1481–1491 (2013)

  8. Filippova, K., Alfonseca, E., Colmenares, C., Kaiser, L., Vinyals, O.: Sentence compression by deletion with LSTMs. In: EMNLP, pp. 360–368 (2015)

  9. Gers, F.A., Schmidhuber, J., Cummins, F.: Learning to forget: continual prediction with LSTM. Neural Comput. 12(10), 2451–2471 (2000)

  10. Jing, H.: Sentence reduction for automatic text summarization. In: Proceedings of the Sixth Conference on Applied Natural Language Processing, pp. 310–315 (2000)

  11. Klerke, S., Goldberg, Y., Søgaard, A.: Improving sentence compression by learning to predict gaze. In: Proceedings of NAACL-HLT 2016, pp. 1528–1533 (2016)

  12. Knight, K., Marcu, D.: Statistics-based summarization - step one: sentence compression. In: AAAI/IAAI, pp. 703–710 (2000)

  13. McDonald, R.T.: Discriminative sentence compression with soft syntactic evidence. In: EACL, pp. 297–304 (2006)

  14. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111–3119 (2013)

  15. Andor, D., Alberti, C., Weiss, D., Severyn, A., Presta, A., Ganchev, K., Petrov, S., Collins, M.: Globally normalized transition-based neural networks (2016). arXiv:1603.06042

  16. Li, C., Liu, Y., Liu, F., Zhao, L., Weng, F.: Improving multi-documents summarization by sentence compression based on expanded constituent parse trees. In: EMNLP, pp. 691–701 (2014)

  17. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33, 159–174 (1977)

  18. Cohen, J.: A coefficient of agreement for nominal scales. Educ. Psychol. Measur. 20(1), 37–46 (1960)

  19. Chen, D., Manning, C.D.: A fast and accurate dependency parser using neural networks. In: EMNLP, pp. 740–750 (2014)

  20. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate (2014). arXiv:1409.0473

  21. Zeng, W., Luo, W., Fidler, S., Urtasun, R.: Efficient summarization with read-again and copy mechanism (2016). arXiv:1611.03382

  22. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5(4), 115–133 (1943)

Acknowledgments

This work was supported by JSPS KAKENHI Grant Numbers 15H02754 and 16H02865. We also thank the anonymous reviewers for their careful reading and helpful suggestions.

Author information

Correspondence to Yang Zhao or Akiko Aizawa.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhao, Y., Senuma, H., Shen, X., Aizawa, A. (2017). Gated Neural Network for Sentence Compression Using Linguistic Knowledge. In: Frasincar, F., Ittoo, A., Nguyen, L., Métais, E. (eds) Natural Language Processing and Information Systems. NLDB 2017. Lecture Notes in Computer Science, vol. 10260. Springer, Cham. https://doi.org/10.1007/978-3-319-59569-6_56

  • DOI: https://doi.org/10.1007/978-3-319-59569-6_56

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59568-9

  • Online ISBN: 978-3-319-59569-6

  • eBook Packages: Computer Science (R0)
