Contactless Human Emotion Analysis Across Different Modalities

Chapter in: Contactless Human Activity Analysis

Part of the book series: Intelligent Systems Reference Library (ISRL, volume 200)

Abstract

Emotion recognition and analysis is an essential part of affective computing, which now plays a vital role in healthcare, security systems, education, and other domains. Numerous scientific studies have developed strategies, drawing on methods from different fields, to identify human emotions automatically. Different emotions can be distinguished by combining data from facial expressions, speech, and gestures; physiological signals such as the electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG), and blood volume pulse also carry emotional information. The main aim of this chapter is to identify various emotion recognition techniques, point to relevant benchmark data sets, and specify algorithms with state-of-the-art results. We also review multimodal emotion analysis, which deals with techniques for fusing the available emotion recognition modalities. The existing literature shows that emotion recognition works best, and achieves satisfactory accuracy, when it uses multiple modalities in context. Finally, we survey the remaining problems, challenges, and corresponding opportunities in this field.
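One common multimodal fusion strategy the abstract alludes to is decision-level (late) fusion, where each modality's classifier outputs a probability distribution over emotion classes and the distributions are combined. The sketch below illustrates the idea with weighted averaging; the modality names, class labels, and weights are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of decision-level (late) fusion of per-modality emotion
# classifiers. All concrete values here are hypothetical placeholders.

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def late_fusion(modality_probs, weights=None):
    """Combine per-modality probability vectors by weighted averaging.

    modality_probs: dict mapping modality name -> list of class probabilities
    weights: optional dict mapping modality name -> importance weight
    Returns (predicted_label, fused_probability_vector).
    """
    if weights is None:
        weights = {m: 1.0 for m in modality_probs}
    total = sum(weights[m] for m in modality_probs)
    fused = [0.0] * len(EMOTIONS)
    for modality, probs in modality_probs.items():
        for i, p in enumerate(probs):
            fused[i] += weights[modality] * p / total
    best = max(range(len(fused)), key=fused.__getitem__)
    return EMOTIONS[best], fused

# Hypothetical per-modality outputs for one sample:
label, fused = late_fusion({
    "face":   [0.10, 0.60, 0.10, 0.20],  # e.g. a CNN over facial frames
    "speech": [0.05, 0.55, 0.20, 0.20],  # e.g. spectral features + SVM
    "eeg":    [0.20, 0.40, 0.25, 0.15],  # e.g. band-power features
})
```

Feature-level (early) fusion, by contrast, concatenates the modalities' feature vectors before a single classifier; late fusion is often preferred when modalities are sampled at different rates or one modality may be missing.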


References

  1. Ekman, P., Oster, H.: Facial expressions of emotion. Ann. Rev. Psychol. 30(1), 527–554 (1979)

    Article  Google Scholar 

  2. Ekman, P., Friesen, W.V., Ellsworth, P.: Emotion in the Human Face: Guidelines for Research and an Integration of Findings, 1st edn. Elsevier (1972)

    Google Scholar 

  3. Posner, J., Russell, J.A., Peterson, B.S.: The circumplex model of affect: an integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev Psychopathol 17, 715–734 (2005)

    Article  Google Scholar 

  4. Wundt, W.: Principles of physiological psychology. In: Readings in the History of Psychology, pp. 248–250. Appleton-Century-Crofts, Connecticut, USA (1948)

    Google Scholar 

  5. De Nadai, S., D’Incà, M., Parodi, F., Benza, M., Trotta, A., Zero, E., Zero, L., Sacile, R.: Enhancing safety of transport by road by on-line monitoring of driver emotions. In: 11th System of Systems Engineering Conference (SoSE), vol. 2016, pp. 1–4. Kongsberg (2016)

    Google Scholar 

  6. Guo, R., Li, S., He, L., Gao, W., Qi, H., Owens, G.: Pervasive and unobtrusive emotion sensing for human mental health. In: Proceedings of the 7th International Conference on Pervasive Computing Technologies for Healthcare, pp. 436–439, Italy, Venice (2013)

    Google Scholar 

  7. Verschuere, B., Crombez, G., Koster, E., Uzieblo, K.: Psychopathy and physiological detection of concealed information: a review. Psychologica Belgica 46, 99–116 (2006)

    Article  Google Scholar 

  8. Marechal, C., et al.: Survey on AI-based multimodal methods for emotion detection. In: High-Performance Modelling and Simulation for Big Data Applications, pp. 307-324. Springer (2019)

    Google Scholar 

  9. Sebe, N., Cohen, I., Gevers, T., Huang, T.S.: Multimodal approaches for emotion recognition: a survey. In: Proceedings of SPIE—The International Society for Optical Engineering (2004)

    Google Scholar 

  10. Mukeshimana, M., Ban, X., Karani, N., Liu, R.: Multimodal emotion recognition for human-computer interaction: a survey. Int. J. Sci. Eng. Res. 8(4), 1289–1301 (2017)

    Google Scholar 

  11. Xu, T., Zhou, Y., Wang, Z., Peng, Y.: Learning emotions EEG-based recognition and brain activity: a survey study on BCI for intelligent tutoring system. In: The 9th International Conference on Ambient Systems. Networks and Technologies (ANT 2018) and the 8th International Conference on Sustainable Energy Information Technology (SEIT-2018), pp. 376–382, Porto, Portugal (2018)

    Google Scholar 

  12. Corneanu, C.A., Simón, M.O., Cohn, J.F., Guerrero, S.E.: Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: history, trends, and affect-related applications. IEEE Trans. Pattern Anal. Mach. Intell. 38(8), 1548–1568 (2016)

    Article  Google Scholar 

  13. Samadiani, N., Huang, G., Cai, B., Luo, W., Chi, H., Xiang, Y., He, J.: A review on automatic facial expression recognition systems assisted by multimodal sensor data. Sensors 19(8), 1863–1890 (2019)

    Article  Google Scholar 

  14. Shu, L., Xie, J., Yang, M., Li, Z., Liao, D., Xu, X., Yang, X.: A review of emotion recognition using physiological signals. Sensors 18(7), 2074–2115 (2018)

    Article  Google Scholar 

  15. Ko, B.C.: A brief review of facial emotion recognition based on visual information. Sensors 18(7), 2074–2115 (2018)

    MathSciNet  Google Scholar 

  16. Sailunaz, K., Dhaliwal, M., Rokne, J., Alhajj, R.: Emotion detection from text and speech: a survey. Soc. Netw. Anal. Mining 8(28) (2018)

    Google Scholar 

  17. Oh, Y., See, J., Anh, C.L., Phan, R.C., Baskaran, M.V.: A survey of automatic facial micro-expression analysis: databases, methods, and challenges. Front. Psychol. 9, 1128–1149 (2018)

    Article  Google Scholar 

  18. Suk, M., Prabhakaran, B.: Real-time mobile facial expression recognition system—a case study. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 132-137. Columbus, OH, USA (2014)

    Google Scholar 

  19. Pohjalainen, J., Ringeval, F., Zhang, Z., Orn Schuller, B.: Spectral and cepstral audio noise reduction techniques in speech emotion recognition. In: Proceedings of the 24th ACM international Conference on Multimedia, pp. 670–674 (2016)

    Google Scholar 

  20. Koelstra, S., et al.: DEAP: a database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 3(1), 18–31 (2012)

    Article  Google Scholar 

  21. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

    Google Scholar 

  22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2014). arXiv preprint arXiv:1409.1556

  23. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9 (2015)

    Google Scholar 

  24. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)

    Google Scholar 

  25. Cootes, T.F., Edwards, G.J., Taylor, C.J.: Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 681–685 (2001)

    Article  Google Scholar 

  26. Zhu, X., Ramanan, D.: Face detection, pose estimation, and landmark localization in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2879-2886 (2012)

    Google Scholar 

  27. Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Robust discriminative response map fitting with constrained local models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3444–3451 (2013)

    Google Scholar 

  28. Xiong, X., De la Torre, F.: Supervised descent method and its applications to face alignment. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 532–539 (2013)

    Google Scholar 

  29. Ren, S., Cao, X., Wei, Y., Sun, J.: Face alignment at 3000 fps via regressing local binary features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1685–1692 (2014)

    Google Scholar 

  30. Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Incremental face alignment in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1859–1866 (2014)

    Google Scholar 

  31. Zhang, Z., Luo, P., Loy, C.C., Tang, X.: Facial landmark detection by deep multi-task learning. In: European Conference on Computer Vision, pp. 94–108 (2014)

    Google Scholar 

  32. Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. 23(10), 1499–1503 (2016)

    Article  Google Scholar 

  33. Kuo, C.-M., Lai, S.-H., Sarkis, M.: A compact deep learning model for robust facial expression recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2121–2129 (2018)

    Google Scholar 

  34. Pitaloka, D.A., Wulandari, A., Basaruddin, T., Liliana, D.Y.: Enhancing CNN with preprocessing stage in automatic emotion recognition. Procedia Computer Science 116, 523–529 (2017)

    Article  Google Scholar 

  35. Zhang, Q., Chen, X., Zhan, Q., Yang, T., Xia, S.: Respiration-based emotion recognition with deep learning. Comput. Indus. 92–93, 84–90 (2017)

    Article  Google Scholar 

  36. Park, M.W., Kim, C.J., Hwang, M., Lee, E.C.: Individual emotion classification between happiness and sadness by analyzing photoplethysmography and skin temperature. In: Proceedings of the 2013 4th World Congress on Software Engineering, pp. 190–194 (2013)

    Google Scholar 

  37. Ouellet, S.: Real-time emotion recognition for gaming using deep convolutional network features (2014). arXiv preprint arXiv:1408.3750

  38. Li, J., Lam, E.Y.: Facial expression recognition using deep neural networks. In: IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–6 (2015)

    Google Scholar 

  39. Liu, P., Han, S., Meng, Z., Tong, Y.: Facial expression recognition via a boosted deep belief network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1805–1812 (2014)

    Google Scholar 

  40. Liu, M., Li, S., Shan, S., Chen, X.: Au-aware deep networks for facial expression recognition. In: 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2013)

    Google Scholar 

  41. Liu, M., Li, S., Shan, S.: Au-inspired deep networks for facial expression feature learning. Neurocomputing 159, 126–136 (2015)

    Article  Google Scholar 

  42. Khorrami, P., Paine, T., Huang, T.: Do deep neural networks learn facial action units when doing expression recognition? (2015). arXiv preprint arXiv:1510.02969v3

  43. Ding, H., Zhou, S.K., Chellappa, R.: Facenet2expnet: regularizing a deep face recognition net for expression recognition. In: 12th IEEE International Conference on Automatic Face & Gesture Recognition, pp. 118–126 (2017)

    Google Scholar 

  44. Zeng, N., Zhang, H., Song, B., Liu, W., Li, Y., Dobaie, A.M.: Facial expression recognition via learning deep sparse autoencoders. Neurocomputing 273, 643–649 (2018)

    Article  Google Scholar 

  45. Cai, J., Meng, Z., Khan, A.S., Li, Z., O’Reilly, J., Tong, Y.: Island loss for learning discriminative features in facial expression recognition. In: 13th IEEE International Conference on Automatic Face & Gesture Recognition, pp. 302–309 (2018)

    Google Scholar 

  46. Meng, Z., Liu, P., Cai, J., Han, S., Tong, Y.: Identity-aware convolutional neural network for facial expression recognition. In: 12th IEEE International Conference on Automatic Face & Gesture Recognition, pp. 558–565 (2017)

    Google Scholar 

  47. Liu, X., Kumar, B., You, J., Jia, P.: Adaptive deep metric learning for identity-aware facial expression recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 522–531 (2017)

    Google Scholar 

  48. Yang, H., Ciftci, U., Yin, L.: Facial expression recognition by de-expression residue learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2168–2177 (2018)

    Google Scholar 

  49. Zhang, Z., Luo, P., Chen, C.L., Tang, X.: From facial expression recognition to interpersonal relation prediction. Int. J. Comput. Vis. 126(5), 1–20 (2018)

    Article  MathSciNet  Google Scholar 

  50. Zhao, G., Pietikinen, M.: Boosted multi-resolution spatiotemporal descriptors for facial expression recognition. Pattern Recogn. Lett. 30, 1117–1127

    Google Scholar 

  51. Song, M., Tao, D., Liu, Z., Li, X., Zhou, M.: Image ratio features for facial expression recognition application. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 40, 779–788 (2010)

    Google Scholar 

  52. Zhang, L., Tjondronegoro, D.: Facial expression recognition using facial movement features. IEEE Trans. Affect. Comput. 2(4), 219–229 (2011)

    Article  Google Scholar 

  53. Poursaberi, A., Noubari, H.A., Gavrilova, M., Yanushkevich, S.N.: Gauss-Laguerre wavelet textural feature fusion with geometrical information for facial expression identification. EURASIP J. Image Video Process. 1–13 (2012)

    Google Scholar 

  54. Ji, Y., Idrissi, K.: Automatic facial expression recognition based on spatiotemporal descriptors. Pattern Recogn. Lett. 33, 1373–1380 (2012)

    Article  Google Scholar 

  55. Ucar, A., Demir, Y., Guzelis, C.: A new facial expression recognition based on curvelet transform and online sequential extreme learning machine initialized with spherical clustering. Neural Comput. Appl. 27, 131–142 (2014)

    Article  Google Scholar 

  56. Zhang, L., Tjondronegoro, D., Chandran, V.: Random Gabor based templates for facial expression recognition in images with facial occlusion. Neurocomputing 145, 451–464 (2014)

    Article  Google Scholar 

  57. Mahersia, H., Hamrouni, K.: Using multiple steerable filters and Bayesian regularization for facial expression recognition. Eng. Appl. Artif. Intell. 38, 190–202 (2015)

    Article  Google Scholar 

  58. Happy, S.L., Member, S., Routray, A.: Automatic facial expression recognition using features of salient facial patches. IEEE Trans. Affect. Comput. 6, 1–12 (2015)

    Article  Google Scholar 

  59. Biswas, S.: An efficient expression recognition method using contourlet transform. In: Proceedings of the 2nd International Conference on Perception and Machine Intelligence, pp. 167–174 (2015)

    Google Scholar 

  60. Siddiqi, M.H., Ali, R., Khan, A.M., Park, Y., Lee, S.: Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE Trans. Image Process. 24(4), 1386–1398 (2015)

    Article  MathSciNet  MATH  Google Scholar 

  61. Cossetin, M.J., Nievola , J.C., Koerich, A.L.: Facial expression recognition using a pairwise feature selection and classification approach. In: 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, pp. 5149–5155 (2016)

    Google Scholar 

  62. Salmam, F.Z., Madani, A., Kissi, M.: Facial expression recognition using decision trees. In: 2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV), Beni Mellal, pp. 125–130 (2016)

    Google Scholar 

  63. Kumar, S., Bhuyan, M.K., Chakraborty, B.K.: Extraction of informative regions of a face for facial expression recognition. IET Comput. Vis. 10(6), 567–576 (2016)

    Article  Google Scholar 

  64. Mollahosseini, A., Chan, D., Mahoor, M.H.: Going deeper in facial expression recognition using deep neural networks. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, pp. 1–10 (2016)

    Google Scholar 

  65. Li, S., Deng, W., Du, J.: Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 2584–2593 (2017)

    Google Scholar 

  66. Reed, S., Sohn, K., Zhang, Y., Lee, H.: Learning to disentangle factors of variation with manifold interaction. In: International Conference on Machine Learning, pp. 1431–1439 (2014)

    Google Scholar 

  67. Devries, T., Biswaranjan, K., Taylor, G.W.: Multi-task learning of facial landmarks and expression. In: 2014 Canadian Conference on Computer and Robot Vision, Montreal, QC, pp. 98–103 (2014)

    Google Scholar 

  68. Tang, Y.: Deep learning using linear support vector machines (2013). arXiv preprint arXiv:1306.0239

  69. Zhang, Z., Luo, P., Loy, C.-C., Tang, X.: Learning social relation traits from face images. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3631–3639 (2015)

    Google Scholar 

  70. Guo, Y., Tao, D., Yu, J., Xiong, H., Li, Y., Tao, D.: Deep neural networks with relativity learning for facial expression recognition. In: 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Seattle, WA, pp. 1–6 (2016)

    Google Scholar 

  71. Kim, B.-K., Dong, S.-Y., Roh, J., Kim, G., Lee, S.-Y.: Fusing aligned and non-aligned face information for automatic affect recognition in the wild: a deep learning approach. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 48–57 (2016)

    Google Scholar 

  72. Pramerdorfer, C., Kampel, M.: Facial expression recognition using convolutional neural networks: state-of-the-art (2016). arXiv preprint arXiv:1612.02903

  73. Hamester, D., Barros, P., Wermter, S. (2015) Face expression recognition with a 2-channel convolutional neural network. In: 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, pp. 1–8 (2015)

    Google Scholar 

  74. Noh, S., Park, H., Jin, Y., Park, J.: Feature-adaptive motion energy analysis for facial expression recognition. In: International Symposium on Visual Computing, pp. 452–463 (2007)

    Google Scholar 

  75. Bashyal, S., Venayagamoorthy, G.K.: Recognition of facial expressions using Gabor wavelets and learning vector quantization. Eng. Appl. Artif. Intell. 21, 1056–1064 (2008)

    Article  Google Scholar 

  76. Wang, H., Hu, Y., Anderson, M., Rollins, P., Makedon, F.: Emotion detection via discriminative Kernel method. In: Proceedings of the 3rd International Conference on Pervasive Technologies Related to Assistive Environments (2010)

    Google Scholar 

  77. Owusu, E., Zhan, Y., Mao, Q.R.: A neural-Ada boost based facial expression recognition system. Expert Syst. Appl. 41, 3383–3390 (2014)

    Article  Google Scholar 

  78. Dahmane, M., Meunier, J.: Prototype-based modeling for facial expression analysis. IEEE Trans. Multimedia 16(6), 1574–1584 (2014)

    Article  Google Scholar 

  79. Hegde, G.P., Seetha, M., Hegde, N.: Kernel locality preserving symmetrical weighted fisher discriminant analysis based subspace approach for expression recognition. Eng. Sci. Technol. Int. J. 19, 1321–1333 (2016)

    Google Scholar 

  80. Levi, G., Hassner, T.: Emotion recognition in the wild via convolutional neural networks and mapped binary patterns. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 503–510 (2015)

    Google Scholar 

  81. Ng, H.-W., Nguyen, V.D., Vonikakis, V., Winkler, S.: Deep learning for emotion recognition on small datasets using transfer learning. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 443–449 (2015)

    Google Scholar 

  82. Kim, B.-K., Lee, H., Roh, J., Lee, S.-Y.: Hierarchical committee of deep cnns with exponentially-weighted decision fusion for static facial expression recognition. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 427–434 (2015)

    Google Scholar 

  83. Yu, Z., Zhang, C.: Image based static facial expression recognition with multiple deep network learning. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 435–442 (2015)

    Google Scholar 

  84. Mandal, T., Majumdar, A., Wu, Q.J.: Face recognition by curvelet based feature extraction. In: Proceedings of the International Conference Image Analysis and Recognition, Montreal, pp. 806–817 (2007)

    Google Scholar 

  85. Mohammed, A.A., Minhas, R., Wu, Q.J., Sid-Ahmed, M.A.: Human face recognition based on multidimensional PCA and extreme learning machine. Pattern Recogn. 44, 2588–2597 (2011)

    Article  MATH  Google Scholar 

  86. Lee, S.H., Plataniotis, K.N., Ro, Y.M.: Intra-class variation reduction using training expression images for sparse representation based facial expression recognition. IEEE Trans. Affect. Comput. 5(3), 340–351 (2014)

    Article  Google Scholar 

  87. Zheng, W.: Multi-view facial expression recognition based on group sparse reduced-rank regression. IEEE Trans. Affect. Comput. 5(1), 71–85 (2014)

    Article  Google Scholar 

  88. Benitez-Quiroz, C.F., Srinivasan, R., Martinez, A.M.: EmotioNet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 5562–5570 (2016)

    Google Scholar 

  89. Hasani, B., Mahoor, M.H.: Facial expression recognition using enhanced deep 3D convolutional neural networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, pp. 2278–2288 (2017)

    Google Scholar 

  90. Zhao, K., Chu, W., Zhang, H.: Deep region and multi-label learning for facial action unit detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 3391–3399 (2016)

    Google Scholar 

  91. Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, pp. 46–53 (2000)

    Google Scholar 

  92. Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Workshops, San Francisco, CA, pp. 94–101 (2010)

    Google Scholar 

  93. Pantic, M., Valstar, M., Rademaker, R., Maat, L.: Web-based database for facial expression analysis. In: 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, pp. 5–10 (2005)

    Google Scholar 

  94. Valstar, M., Pantic, M.: Induced disgust, happiness and surprise: an addition to the mmi facial expression database. In: Proceedings of 3rd International Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect, p. 65 (2010)

    Google Scholar 

  95. Susskind, J.M., Anderson, A.K., Hinton, G.E.: The toronto face database. Department of Computer Science, University of Toronto, Toronto, ON, Canada. Technical Report, vol. 3 (2010)

    Google Scholar 

  96. Goodfellow, I.J., Erhan, D., Carrier, P.L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., Lee, D.-H., et al.: Challenges in representation learning: a report on three machine learning contests. In: International Conference on Neural Information Processing, pp. 117–124 (2013)

    Google Scholar 

  97. Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with gabor wavelets. In: Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200–205 (1998)

    Google Scholar 

  98. Dhall, A., Murthy, O.R., Goecke, R., Joshi, J., Gedeon, T.: Video and image based emotion recognition challenges in the wild: EmotioW 2015. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 423–426 (2015)

    Google Scholar 

  99. Yin, L., Wei, X., Sun, Y., Wang, J., Rosato, M.J.: A 3d facial expression database for facial behavior research. In: 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Southampton, pp. 211–216 (2006)

    Google Scholar 

  100. Mavadati, S.M., Mahoor, M.H., Bartlett, K., Trinh, P., Cohn, J.F.: DISFA: a spontaneous facial action intensity database. IEEE Trans. Affect. Comput. 4(2), 151–160 (2013)

    Article  Google Scholar 

  101. Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W.F., Weiss, B.: A database of German emotional speech. In: INTERSPEECH, pp. 1517–1527 (2005)

    Google Scholar 

  102. Tao, J., Liu, F., Zhang, M., Jia, H.: Design of speech corpus for mandarin text to speech. In: The Blizzard Challenge 2008 Workshop (2008)

    Google Scholar 

  103. Engberg, I.S., Hansen, A.V., Andersen, O., Dalsgaard, P.: Design, recording and verification of a danish emotional speech database. In: Fifth European Conference on Speech Communication and Technology (1997)

    Google Scholar 

  104. Wang, K.X., Zhang, Q.L., Liao, S.Y.: A database of elderly emotional speech. Proc. Int. Symp. Signal Process. Biomed. Eng Inf. 549–553 (2014)

    Google Scholar 

  105. Lee, S., Yildirim, S., Kazemzadeh, A., Narayanan, S.: An articulatory study of emotional speech production. In: Ninth European Conference on Speech Communication and Technology (2005)

    Google Scholar 

  106. Emotional prosody speech and transcripts. http://olac.ldc.upenn.edu/item/oai:www.ldc.upenn.edu:LDC2002S28. Accessed15 May 2019

  107. Batliner, A., Steidl, S., Noeth, E.: Releasing a thoroughly annotated and processed spontaneous emotional database: the FAU Aibo Emotion Corpus. In: Proceedings of a Satellite Workshop of LREC, p. 28 (2008)

    Google Scholar 

  108. Albornoz, E.M., Milone, D.H., Rufiner, H.L.: Spoken emotion recognition using hierarchical classifiers. Comput. Speech Lang. 25(3), 556–570 (2011)

    Article  Google Scholar 

  109. Bitouk, D., Verma, R., Nenkova, A.: Class-level spectral features for emotion recognition. Speech Commun. 52, 613–625 (2010)

    Article  Google Scholar 

  110. Borchert, M., Dusterhoft, A.: Emotions in speech-experiments with prosody and quality features in speech for use in categorical and dimensional emotion recognition environments. In: Proceedings of 2005 IEEE International Conference on Natural Language Processing and Knowledge Engineering, pp. 147–151 (2005)

    Google Scholar 

  111. Mao, Q., Dong, M., Huang, Z., Zhan, Y.: Learning salient features for speech emotion recognition using convolutional neural networks. IEEE Trans. Multimedia 16, 2203–2213 (2014)

    Article  Google Scholar 

  112. Schuller, B., Muller, R., Lang, M., Rigoll, G.: Speaker independent emotion recognition by early fusion of acoustic and linguistic features within ensembles. In: Ninth European Conference on Speech Communication and Technology (2005)

    Google Scholar 

  113. Shen, P., Changjun, Z., Chen, X.: Automatic speech emotion recognition using support vector machine. Int. Conf. Electron. Mech. Eng. Inf. Technol. (EMEIT) 2, 621–625 (2011)

    Article  Google Scholar 

  114. Deng, J., Zhang, Z., Marchi, E., Schuller, B.: Sparse autoencoder-based feature transfer learning for speech emotion recognition. In: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 511–516 (2013)

    Google Scholar 

  115. Wu, S., Falk, T.H., Chan, W.-Y.: Automatic speech emotion recognition using modulation spectral features. Speech Commun. 53, 768–785 (2011)

    Article  Google Scholar 

  116. Wang, K., An, N., Li, B.N., Zhang, Y., Li, L.: Speech emotion recognition using fourier parameters. IEEE Trans. Affect. Comput. 6, 69–75 (2015)

    Article  Google Scholar 

  117. Yang, B., Lugger, M.: Emotion recognition from speech signals using new harmony features. Signal Process. 90(5), 1415–1423 (2010)

    Article  MATH  Google Scholar 

  118. Ververidis, D., Kotropoulos, C.: Emotional speech classification using gaussian mixture models and the sequential floating forward selection algorithm. In: 2005 IEEE International Conference on Multimedia and Expo (ICME), vol. 7, pp. 1500–1503 (2005)

    Google Scholar 

  119. Zhang, Z., Weninger, F., Wollmer, M., Schuller, B.: Unsupervised learning in cross-corpus acoustic emotion recognition. In: IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 523–528 (2011)

    Google Scholar 

  120. Busso, C., Lee, S., Narayanan, S.: Analysis of emotionally salient aspects of fundamental frequency for emotion detection. IEEE Trans. Audio Speech Lang. Process. 17(4), 582–596 (2009)

    Article  Google Scholar 

  121. Grimm, M., Kroschel, K., Provost, E.M., Narayanan, S.: Primitives based evaluation and estimation of emotions in speech. Speech Commun. 49, 787–800 (2007)

    Article  Google Scholar 

  122. Deng, J., Zhang, Z., Eyben, F., Schuller, B.: Autoencoder-based unsupervised domain adaptation for speech emotion recognition. IEEE Signal Process. Lett. 21(9), 1068–1072 (2014)

    Article  Google Scholar 

  123. Kwon, O., Chan, K., Hao, J., Lee, T.-W. : Emotion recognition by speech signals. In: Eighth European Conference on Speech Communication and Technology (2003)

    Google Scholar 

  124. Lee, C., Mower, E., Busso, C., Lee, S., Narayanan, S.: Emotion recognition using a hierarchical binary decision tree approach. Speech Commun. 53, 1162–1171 (2011)

    Article  Google Scholar 

  125. Iemocap database. https://sail.usc.edu/iemocap/. Accessed 15 May 2019

  126. Han, K., Yu, D., Tashev, I.: Speech emotion recognition using deep neural network and extreme learning machine. In: Fifteenth Annual Conference of the International Speech Communication Association (2014)

    Google Scholar 

  127. Mirsamadi, S., Barsoum, E., Zhang, C.: Automatic speech emotion recognition using recurrent neural networks with local attention. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2227–2231 (2017)

    Google Scholar 

  128. Surrey audio-visual expressed emotion database. https://sail.usc.edu/iemocap/. Accessed 15 Feb 2020

  129. Martin, O., Kotsia, I., Macq, B., Pitas, I.: The eNTERFACE’05 audio-visual emotion database. In: 22nd International Conference on Data Engineering Workshops (ICDEW’06), pp. 8–16 (2006)

    Google Scholar 

  130. Ringeval, F., Sonderegger, A., Sauer, J., Lalanne, D.: Introducing the recola multimodal corpus of remote collaborative and affective interactions. In: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–8 (2013)

    Google Scholar 

  131. Trigeorgis, G., Ringeval, F., Brueckner, R., Marchi, E., Nicolaou, M.A., Schuller, B., Zafeiriou, S.: Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5200–5204 (2016)

    Google Scholar 

  132. Grimm, M., Kroschel, K., Narayanan, S.: The vera am Mittag German audiovisual emotional speech database. In: IEEE International Conference on Multimedia and Expo, pp. 865–868 (2008)

    Google Scholar 

  133. Schuller, B.: Recognizing affect from linguistic information in 3d continuous space. IEEE Trans. Affect. Comput. 2(4), 192–205 (2011)

  134. Schuller, B., Müller, R., Eyben, F., Gast, J., Hörnler, B., Wöllmer, M., Rigoll, G., Höthker, A., Konosu, H.: Being bored? Recognising natural interest by extensive audiovisual integration for real-life application. Image Vis. Comput. 27(12), 1760–1774 (2009)

  135. McKeown, G., Valstar, M., Cowie, R., Pantic, M., Schröder, M.: The SEMAINE database: annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans. Affect. Comput. 3(1), 5–17 (2011)

  136. Kaya, H., Fedotov, D., Yesilkanat, A., Verkholyak, O., Zhang, Y., Karpov, A.: LSTM based cross-corpus and cross-task acoustic emotion recognition. In: Interspeech, pp. 521–525 (2018)

  137. Subramanian, R., Wache, J., Khomami Abadi, M., Vieriu, R., Winkler, S., Sebe, N.: ASCERTAIN: emotion and personality recognition using commercial sensors. IEEE Trans. Affect. Comput. (2016). https://doi.org/10.1109/TAFFC.2016.2625250

  138. Soleymani, M., Lichtenauer, J., Pun, T., Pantic, M.: A multimodal database for affect recognition and implicit tagging. IEEE Trans. Affect. Comput. 3, 42–55 (2012)

  139. Chen, J., Hu, B., Xu, L., Moore, P., Su, Y.: Feature-level fusion of multimodal physiological signals for emotion recognition. In: IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, pp. 395–399 (2015)

  140. Tong, Z., Chen, X., He, Z., Tong, K., Fang, Z., Wang, X.: Emotion recognition based on photoplethysmogram and electroencephalogram. In: 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, vol. 2, pp. 402–407 (2018)

  141. Wöllmer, M., Eyben, F., Reiter, S., Schuller, B., Cox, C., Douglas-Cowie, E., Cowie, R.: Abandoning emotion classes - towards continuous emotion recognition with modelling of long-range dependencies. In: Proceedings of 9th Interspeech 2008, incorporating 12th Australasian International Conference on Speech Science and Technology (SST 2008), Brisbane, Australia, pp. 597–600 (2008)

  142. Caridakis, G., Malatesta, L., Kessous, L., Amir, N., Raouzaiou, A., Karpouzis, K.: Modeling naturalistic affective states via facial and vocal expressions recognition. In: Proceedings of 8th International Conference on Multimodal Interfaces, pp. 146–154 (2006)

  143. Subramanian, R., Wache, J., Abadi, M.K., Vieriu, R.L., Winkler, S., Sebe, N.: ASCERTAIN: emotion and personality recognition using commercial sensors. IEEE Trans. Affect. Comput. 9, 147–160 (2018)

  144. Pan, Y., Shen, P., Shen, L.: Speech emotion recognition using support vector machine. Int. J. Smart Home 6(2), 101–108 (2012)

  145. Xiao, Z., Dellandrea, E., Dou, W., Chen, L.: Multi-stage classification of emotional speech motivated by a dimensional emotion model. Multimedia Tools Appl. 46(1), 119 (2010)

  146. Kim, J., Englebienne, G., Truong, K.P., Evers, V.: Towards speech emotion recognition in the wild using aggregated corpora and deep multi-task learning (2017). arXiv preprint arXiv:1708.03920

  147. Chen, M., He, X., Yang, J., Zhang, H.: 3-d convolutional recurrent neural networks with attention model for speech emotion recognition. IEEE Signal Process. Lett. 25(10), 1440–1444 (2018)

  148. Latif, S., Rana, R., Younis, S., Qadir, J., Epps, J.: Transfer learning for improving speech emotion classification accuracy (2018). arXiv preprint arXiv:1801.06353

  149. Sahu, S., Gupta, R., Sivaraman, G., Espy-Wilson, C.: Smoothing model predictions using adversarial training procedures for speech based emotion recognition. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4934–4938 (2018)

  150. Liu, J., Su, Y., Liu, Y.: Multi-modal emotion recognition with temporal-band attention based on LSTM-RNN. In: Zeng, B., Huang, Q., El Saddik, A., Li, H., Jiang, S., Fan, X. (eds.) Advances in Multimedia Information Processing - PCM 2017, vol. 10735. Springer, Cham (2018)

  151. Koelstra, S., Patras, I.: Fusion of facial expressions and EEG for implicit affective tagging. Image Vis. Comput. 31, 164–174 (2013)

  152. Petta, P., Pelachaud, C., Cowie, R.: Emotion-Oriented Systems: The Humaine Handbook. Springer, Berlin (2011)

  153. Huang, C., Liew, S.S., Lin, G.R., Poulsen, A., Ang, M.J.Y., Chia, B.C.S., Chew, S.Y., Kwek, Z.P., Wee, J.L.K., Ong, E.H., et al.: Discovery of irreversible inhibitors targeting histone methyltransferase, SMYD3. ACS Med. Chem. Lett. 10, 978–984 (2019)

  154. Benezeth, Y., Li, P., Macwan, R., Nakamura, K., Gomez, R., Yang, F., et al.: Remote heart rate variability for emotional state monitoring. In: Proceedings of the 2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Las Vegas, NV, USA, pp. 153–156, 4–7 March 2018

Corresponding author

Correspondence to Nazmun Nahid.

Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Nahid, N., Rahman, A., Ahad, M.A.R. (2021). Contactless Human Emotion Analysis Across Different Modalities. In: Ahad, M.A.R., Mahbub, U., Rahman, T. (eds) Contactless Human Activity Analysis. Intelligent Systems Reference Library, vol 200. Springer, Cham. https://doi.org/10.1007/978-3-030-68590-4_9
