
Knowledge Injection to Neural Networks with Progressive Learning Strategy

Conference paper — Agents and Artificial Intelligence (ICAART 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12613)

Abstract

Deep learning has become the dominant and most practical approach to a wide range of problems. With the ability to automatically extract a hierarchy of semantic features from data, neural networks often outperform other techniques on complex tasks. However, to perform well, these models need a vast amount of data, which is not always available. To overcome this problem, we propose injecting knowledge into the neural network rather than letting it struggle on its own. Our proposed training policy guides the model to learn labels from a similarity distribution. Finally, we conduct experiments on the chord modeling problem to show the effectiveness of our method.
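The training policy sketched in the abstract — learning labels from a similarity distribution, progressively sharpened toward the true one-hot label — could be illustrated roughly as follows. This is a minimal sketch, not the paper's exact formulation: the similarity matrix values, the linear blending schedule, and the function names are illustrative assumptions.

```python
import numpy as np

def similarity_targets(label, similarity, alpha):
    """Blend the one-hot label with a similarity-based soft distribution.

    alpha = 0 -> pure similarity distribution (early training),
    alpha = 1 -> pure one-hot label (late training).
    """
    n_classes = similarity.shape[0]
    one_hot = np.zeros(n_classes)
    one_hot[label] = 1.0
    soft = similarity[label] / similarity[label].sum()  # normalize the row
    return alpha * one_hot + (1.0 - alpha) * soft

def cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy of predicted probabilities against a (soft) target."""
    return -np.sum(target * np.log(pred + eps))

# Toy similarity matrix over 3 classes (hypothetical values):
# similar classes get larger entries, so early targets spread
# probability mass onto neighbors of the true class.
S = np.array([[1.0, 0.5, 0.1],
              [0.5, 1.0, 0.2],
              [0.1, 0.2, 1.0]])

# Progressively shift weight from the similarity distribution
# to the hard label across training phases.
for alpha in np.linspace(0.0, 1.0, 5):
    target = similarity_targets(0, S, alpha)
    # ... train one phase against `target` instead of the hard label ...
```

At `alpha = 1` the target reduces to the ordinary one-hot label, so the schedule degenerates gracefully to standard cross-entropy training.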



Author information

Corresponding author: Ha Thanh Nguyen


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Nguyen, H.T., Vu, T.K., Racharak, T., Nguyen, L.M., Tojo, S. (2021). Knowledge Injection to Neural Networks with Progressive Learning Strategy. In: Rocha, A.P., Steels, L., van den Herik, J. (eds) Agents and Artificial Intelligence. ICAART 2020. Lecture Notes in Computer Science, vol 12613. Springer, Cham. https://doi.org/10.1007/978-3-030-71158-0_13


  • DOI: https://doi.org/10.1007/978-3-030-71158-0_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-71157-3

  • Online ISBN: 978-3-030-71158-0

  • eBook Packages: Computer Science (R0)
