Conclusion

Neural Representations of Natural Language

Part of the book series: Studies in Computational Intelligence ((SCI,volume 783))

Abstract

In this book we have introduced methods for finding representations of natural language, with a special focus on neural networks and related technologies. Neural networks elegantly produce useful representations as by-products when applied to NLP tasks. We have introduced recent advances in neural networks, in particular various forms of recurrent neural networks, and have covered techniques for working with words, word senses, and larger structures such as phrases, sentences, and documents.

The key to artificial intelligence has always been the representation. You and I are streaming data engines.

Jeff Hawkins, 2012
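
As a concrete illustration of representations arising as by-products, the sketch below (not taken from the book; it assumes PyTorch, and the vocabulary size, dimensions, and choice of a GRU encoder are arbitrary) passes a batch of token ids through an embedding layer and a recurrent encoder: the embedding table yields word-level vectors, and the final hidden state can serve as a fixed-length sentence-level representation.

    # Minimal illustrative sketch: word- and sentence-level representations
    # obtained as by-products of a recurrent encoder (assumed PyTorch).
    import torch
    import torch.nn as nn

    vocab_size, embed_dim, hidden_dim = 1000, 64, 128   # arbitrary sizes

    embedding = nn.Embedding(vocab_size, embed_dim)            # word representations
    encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # sequence encoder

    # A dummy batch of two "sentences", each seven token ids long.
    token_ids = torch.randint(0, vocab_size, (2, 7))

    word_vectors = embedding(token_ids)         # (2, 7, 64): one vector per word
    _, final_hidden = encoder(word_vectors)     # (1, 2, 128): last hidden state
    sentence_vectors = final_hidden.squeeze(0)  # (2, 128): one vector per sentence

    print(word_vectors.shape, sentence_vectors.shape)

In a full system these parameters would be learned by training on a downstream NLP task (for example, classification), after which the word and sentence vectors remain available for reuse in other tasks.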

Author information

Correspondence to Lyndon White.

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

White, L., Togneri, R., Liu, W., Bennamoun, M. (2019). Conclusion. In: Neural Representations of Natural Language. Studies in Computational Intelligence, vol 783. Springer, Singapore. https://doi.org/10.1007/978-981-13-0062-2_6
