
Emotional computing based on cross-modal fusion and edge network data incentive

  • Original Article
  • Published:
Personal and Ubiquitous Computing

A Correction to this article was published on 15 February 2020

This article has been updated

Abstract

In applications involving large-scale emotional events and complex emotion recognition, improving recognition accuracy, computational efficiency, and quality of user experience is the primary problem to be solved. To address these problems, this paper proposes an emotional computing algorithm based on cross-modal fusion and edge network data incentive. To improve the efficiency of emotional data collection and the accuracy of emotion recognition, a deep cross-modal data fusion method is designed in which non-linear cross-layer mappings capture the semantic deviation between modalities and guide the fusion. To improve computational efficiency and quality of user experience, a data incentive algorithm for the edge network is designed, based on the overlapping delay gaps and incentive weights of large-scale data collection and error detection. Finally, the edge network is mapped from the set of emotional data elements incentivized by heterogeneous emotional events into a finite data set space, in which all emotional events and emotional data elements are balanced; on this basis, the emotional computing algorithm based on cross-modal data fusion is constructed. Simulation experiments and theoretical analysis show that the proposed algorithm outperforms the edge network data incentive algorithm alone and the cross-modal data fusion algorithm alone in recognition accuracy, complex emotion recognition efficiency, computational efficiency, and delay.
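To make the two components named in the abstract concrete, the sketch below is a minimal, hypothetical Python/PyTorch illustration: a two-branch fusion network in which each modality's representation is corrected by a non-linear cross-layer mapping of the other branch, and a toy incentive weight that rewards edge nodes with small collection delay gaps and low error-detection rates. The class and function names, feature dimensions, and weighting formula are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not the authors' code): cross-modal fusion with
# non-linear cross-layer mapping, plus a toy edge-node incentive weight.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuses two modalities (e.g., speech and text features); each branch is
    corrected by a non-linear mapping of the other branch's hidden layer."""

    def __init__(self, dim_a=128, dim_b=128, hidden=64, n_classes=6):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        # non-linear cross-layer mappings between the two branches
        self.cross_ab = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
        self.cross_ba = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_a, x_b):
        h_a = self.enc_a(x_a)
        h_b = self.enc_b(x_b)
        h_a = h_a + self.cross_ba(h_b)  # branch A adjusted by a mapped view of B
        h_b = h_b + self.cross_ab(h_a)  # branch B adjusted by a mapped view of A
        return self.classifier(torch.cat([h_a, h_b], dim=-1))


def incentive_weight(delay, max_delay, error_rate, alpha=0.5):
    """Toy incentive weight for an edge node: higher when the collection delay
    gap (relative to a deadline) and the error-detection rate are both small."""
    delay_term = max(0.0, 1.0 - delay / max_delay)
    return alpha * delay_term + (1.0 - alpha) * (1.0 - error_rate)


# Usage on random features for two modalities (batch of 4 samples):
model = CrossModalFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 128))  # (4, 6) emotion scores
w = incentive_weight(delay=0.3, max_delay=1.0, error_rate=0.05)
```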


Change history

  • 15 February 2020

The funding support information was omitted from the printed version of the proof.


Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Xiaoyan Shen.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ma, L., Ju, F., Wan, J. et al. Emotional computing based on cross-modal fusion and edge network data incentive. Pers Ubiquit Comput 23, 363–372 (2019). https://doi.org/10.1007/s00779-019-01232-1


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00779-019-01232-1

Keywords
