
Document-Improved Hierarchical Modular Attention for Event Detection

  • Conference paper
  • In: Knowledge Science, Engineering and Management (KSEM 2020)

Abstract

The task of event detection aims to find the event trigger in a sentence and identify the correct event type. Contextual information is crucial to event detection, as it helps a model identify triggers more accurately. Existing models that utilize contextual information only take document information as additional features for the deep learning model, without considering the specific contribution of document information to trigger classification. In this paper, we propose a Document-Improved Hierarchical Modular Event Detection (DIHMED) model to extract hierarchical contextual information. Specifically, considering the relevance between event types, we build independent modules that combine document-level information to express this relevance. Since events from the same document are often related, these modules can make better use of document-level information. We conduct several experiments on a Chinese political event dataset, and the results show that our model outperforms state-of-the-art models.
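The abstract describes, but does not fully specify, how the type-group modules combine sentence-level and document-level information. The following is a minimal, hypothetical PyTorch sketch of one way such a hierarchical modular attention could be wired: each module attends over sentence tokens conditioned on a document vector and scores only its own group of related event types. All class names, dimensions, and the exact attention form are assumptions for illustration, not the authors' actual DIHMED implementation.

```python
import torch
import torch.nn as nn


class ModularAttention(nn.Module):
    """One type-group module (assumed design): attends over sentence tokens
    conditioned on a document-level vector, then scores the event types in its group."""

    def __init__(self, hidden_dim, num_group_types):
        super().__init__()
        self.attn = nn.Linear(2 * hidden_dim, 1)            # scores each (token, document) pair
        self.classifier = nn.Linear(2 * hidden_dim, num_group_types)

    def forward(self, token_states, doc_vec):
        # token_states: (batch, seq_len, hidden); doc_vec: (batch, hidden)
        doc_expanded = doc_vec.unsqueeze(1).expand_as(token_states)
        features = torch.cat([token_states, doc_expanded], dim=-1)
        weights = torch.softmax(self.attn(features).squeeze(-1), dim=-1)    # (batch, seq_len)
        context = torch.bmm(weights.unsqueeze(1), token_states).squeeze(1)  # (batch, hidden)
        return self.classifier(torch.cat([context, doc_vec], dim=-1))       # (batch, num_group_types)


class DIHMEDSketch(nn.Module):
    """Hierarchy: one module per group of related event types; the group logits
    are concatenated into the full trigger-type score vector."""

    def __init__(self, hidden_dim, group_sizes):
        super().__init__()
        self.group_modules = nn.ModuleList(
            [ModularAttention(hidden_dim, n) for n in group_sizes]
        )

    def forward(self, token_states, doc_vec):
        return torch.cat([m(token_states, doc_vec) for m in self.group_modules], dim=-1)


# Toy usage: 2 sentences of 10 tokens, hidden size 128, and event types split
# into two groups of 3 and 4 related types (grouping is hypothetical).
if __name__ == "__main__":
    model = DIHMEDSketch(hidden_dim=128, group_sizes=[3, 4])
    tokens = torch.randn(2, 10, 128)   # e.g. sentence-encoder (BiLSTM) outputs
    doc = torch.randn(2, 128)          # e.g. a document-level summary vector
    print(model(tokens, doc).shape)    # torch.Size([2, 7])
```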

This work is supported by the National Natural Science Foundation of China (Grant No. U1934212).




Author information


Correspondence to Qingfeng Du.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ni, Y., Du, Q., Xu, J. (2020). Document-Improved Hierarchical Modular Attention for Event Detection. In: Li, G., Shen, H., Yuan, Y., Wang, X., Liu, H., Zhao, X. (eds) Knowledge Science, Engineering and Management. KSEM 2020. Lecture Notes in Computer Science, vol 12275. Springer, Cham. https://doi.org/10.1007/978-3-030-55393-7_29


  • DOI: https://doi.org/10.1007/978-3-030-55393-7_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-55392-0

  • Online ISBN: 978-3-030-55393-7

  • eBook Packages: Computer Science, Computer Science (R0)
