
LSTM-SN: complex text classifying with LSTM fusion social network

Published in: The Journal of Supercomputing

Abstract

Many approaches to natural language processing (NLP) and natural language understanding (NLU) tasks are model-oriented and overlook the importance of data features. Such models perform poorly on many tasks built on loosely featured, unbalanced, and otherwise difficult data, including text classification. To address this, this paper proposes LSTM-SN (long short-term memory RNN fusion social network), a classification method designed for extremely complex datasets. The approach condenses the characteristics of the dataset; an LSTM is then combined with social-network methods derived from the specific dataset to complete the classification task, and complex-network structure-evolution methods are used to discover dynamic social attributes. The experimental results show that this method overcomes the shortcomings of traditional methods and achieves better classification results. Finally, a method for calculating the accuracy of the fusion model is proposed. The research ideas of this paper have far-reaching significance for social data analysis and relation extraction.
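The paper's fusion model and accuracy calculation are not reproduced on this page. As a rough illustration of the general idea of fusing a text classifier's output with social-network-derived signals and then scoring the fused model, a minimal sketch might look like the following (the function names, the weighted-average fusion scheme, and the parameter `alpha` are all illustrative assumptions, not the authors' LSTM-SN method):

```python
# Illustrative late-fusion sketch (NOT the authors' LSTM-SN implementation):
# combine per-class probabilities from a text model with scores derived
# from a social network, then measure the fused classifier's accuracy.

def fuse(text_probs, social_probs, alpha=0.7):
    """Weighted average of two per-class probability lists (alpha is assumed)."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(text_probs, social_probs)]

def predict(probs):
    """Index of the highest-probability class."""
    return max(range(len(probs)), key=lambda i: probs[i])

def fusion_accuracy(text_outputs, social_outputs, labels, alpha=0.7):
    """Fraction of samples the fused model classifies correctly."""
    correct = sum(
        predict(fuse(t, s, alpha)) == y
        for t, s, y in zip(text_outputs, social_outputs, labels)
    )
    return correct / len(labels)

if __name__ == "__main__":
    # Two toy samples, three classes each.
    text_outputs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
    social_outputs = [[0.2, 0.7, 0.1], [0.1, 0.8, 0.1]]
    labels = [0, 1]
    print(fusion_accuracy(text_outputs, social_outputs, labels))  # → 1.0
```

In practice the text-side probabilities would come from an LSTM classifier and the social-side scores from features of the social network, but the combination-and-scoring step reduces to something of this shape.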



Acknowledgements

This work is supported by the Natural Science Foundation of Shaanxi Province of China (2021JM-344), the Key Research and Development Program of Shaanxi Province (No. 2018ZDXM-GY-036), and the Shaanxi Key Laboratory of Intelligent Processing for Big Energy Data (No. IPBED7). This work is also supported by an independent research project of the Shaanxi Provincial Key Laboratory of Network Computing and Security Technology (NCST2021YB-05).

Author information


Contributions

The contributions of the authors to this manuscript are as follows: Wei Wei designed the accuracy calculation model and revised the paper. Xiaowan Li constructed and implemented the classification model and the accuracy calculation model, and wrote the paper. Beibei Zhang designed the classification model architecture and analyzed the specific steps of data preprocessing and the structure of the paper. Linfeng Li completed the data filtering, labeling, and preprocessing, and drew the figures in the article. Robertas Damaševičius and Rafal Scherer tested the model and polished the language of the paper. At the beginning of this research, a large number of data annotators were needed to complete the data markup so that subsequent work could proceed. We thank Ms. Ding Xiangxiang, Mr. Sun Xuesong, and Mr. Wang Tuo, who participated in the data labeling and cleaning, and Dr. Qiang Si for helping to translate this manuscript.

Corresponding author

Correspondence to Beibei Zhang.

Ethics declarations

Conflict of interest

All text, figures, and tables in the article belong to the original authors and follow ethical guidelines; there is no academic misconduct. All authors were informed of and agreed to the submission of this manuscript for publication in The Journal of Supercomputing. Because the data used in this manuscript belong to the project, they cannot be shared, in accordance with the project requirements. The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wei, W., Li, X., Zhang, B. et al. LSTM-SN: complex text classifying with LSTM fusion social network. J Supercomput 79, 9558–9583 (2023). https://doi.org/10.1007/s11227-022-05034-w

