Abstract
In this report, a deep learning architecture that combines a convolutional neural network with a recurrent neural network is proposed and implemented for text classification using pre-trained word vectors. The proposed method treats the word vectors as a static lookup table that is not updated during training, yet the network can still ignore the noise caused by missing words. The experimental results show that the accuracy of this study is consistent with the accuracy reported in other studies, which demonstrates the feasibility of this architecture. The architecture has the following advantages: its accuracy is higher than that of a recurrent neural network alone; its results are more stable than those of a convolutional neural network; and fewer epochs are needed to reach stable results. Its main shortcoming is that training consumes considerably more time.
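To make the described architecture concrete, the following PyTorch code is a minimal sketch, not the authors' implementation: the class name `CNNRNNClassifier`, the layer sizes, the kernel width, and the choice of an LSTM as the recurrent component are all illustrative assumptions. It shows the one detail the abstract does state, a frozen pre-trained embedding lookup, feeding convolutional features into a recurrent layer for classification.

```python
# Hypothetical sketch of a CNN + RNN text classifier with a static
# (frozen) pre-trained word-vector lookup table, as the abstract describes.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class CNNRNNClassifier(nn.Module):
    def __init__(self, pretrained_vectors, num_classes,
                 num_filters=128, hidden_size=128):
        super().__init__()
        # Static lookup table: pre-trained vectors, not updated in training.
        self.embedding = nn.Embedding.from_pretrained(
            pretrained_vectors, freeze=True)
        embed_dim = pretrained_vectors.size(1)
        # Convolution extracts local n-gram features from the embedded text.
        self.conv = nn.Conv1d(embed_dim, num_filters,
                              kernel_size=3, padding=1)
        # Recurrent layer models the sequence of convolutional features.
        self.rnn = nn.LSTM(num_filters, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, token_ids):            # (batch, seq_len)
        x = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                # Conv1d expects (batch, dim, seq)
        x = torch.relu(self.conv(x))
        x = x.transpose(1, 2)                # back to (batch, seq, filters)
        _, (h_n, _) = self.rnn(x)            # final hidden state summarizes text
        return self.fc(h_n[-1])              # logits over classes

# Usage example with random stand-in "pre-trained" vectors (10k-word vocab).
vectors = torch.randn(10000, 300)
model = CNNRNNClassifier(vectors, num_classes=2)
logits = model(torch.randint(0, 10000, (4, 50)))  # batch of 4 sequences
```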
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Wang, M.S., Wen, T.C. (2020). Implementation of Text Classification Model Based on Recurrent Neural Networks. In: Shen, J., Chang, Y.C., Su, Y.S., Ogata, H. (eds) Cognitive Cities. IC3 2019. Communications in Computer and Information Science, vol 1227. Springer, Singapore. https://doi.org/10.1007/978-981-15-6113-9_7
DOI: https://doi.org/10.1007/978-981-15-6113-9_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-6112-2
Online ISBN: 978-981-15-6113-9
eBook Packages: Computer Science, Computer Science (R0)