
Emotion Recognition from Semi Natural Speech Using Artificial Neural Networks and Excitation Source Features

  • Conference paper
Contemporary Computing (IC3 2012)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 306)

Abstract

This paper proposes the Linear Prediction (LP) residual of the speech signal for characterizing basic emotions. The LP residual is extracted through LP analysis, i.e., by inverse filtering the speech signal, and it mainly retains the higher-order relations among the signal samples. The instant of glottal closure in a speech signal is known as an epoch, and the significant excitation of the vocal tract usually takes place at this instant. For analysing speech emotions, the LP residual samples chosen around the glottal closure instants are used. A semi-natural database, GEU-SNESC (Graphic Era University Semi Natural Emotion Speech Corpus), collected by recording dialogues of film actors from Hindi movies, is used for modeling the emotions. Four emotions are studied: anger, happiness, neutral, and sadness. Auto-associative neural network models are used for characterizing the basic emotions present in the speech. Average emotion recognition rates of 66% and 59% are observed for the epoch-based and the entire LP residual samples, respectively.
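The abstract describes the processing chain only at a high level, so the sketch below is an illustrative reconstruction, not the authors' implementation. It derives the LP residual by frame-wise LP analysis and inverse filtering, then keeps only the residual samples around approximate epochs. The LP order, frame sizes, the Hilbert-envelope peak picking used as a stand-in epoch detector, and the region kept around each epoch are all assumptions, since the abstract does not specify them.

```python
# Illustrative sketch only (not the authors' code): LP residual via inverse
# filtering, plus selection of samples around approximate glottal closure
# instants. LP order, frame sizes, the epoch proxy, and the region width
# are assumptions, not values from the paper.
import numpy as np
import librosa
import scipy.signal as sig


def lp_residual(y, sr, order=12, frame_s=0.025, hop_s=0.010):
    """Frame-wise LP analysis followed by inverse filtering with A(z)."""
    n_frame, n_hop = int(frame_s * sr), int(hop_s * sr)
    residual = np.zeros_like(y)
    for start in range(0, len(y) - n_frame, n_hop):
        frame = y[start:start + n_frame]
        if np.max(np.abs(frame)) < 1e-6:           # skip silent frames
            continue
        a = librosa.lpc(frame, order=order)        # [1, a1, ..., ap]
        # e[n] = sum_k a_k * y[n - k]; overlapping frames simply overwrite
        residual[start:start + n_frame] = sig.lfilter(a, [1.0], frame)
    return residual


def epoch_region_samples(residual, sr, region_ms=2.0, max_f0=400.0):
    """Crude epoch proxy: peaks of the Hilbert envelope of the residual,
    at most one per shortest expected pitch period; a small region of
    residual samples around each detected instant is retained."""
    envelope = np.abs(sig.hilbert(residual))
    peaks, _ = sig.find_peaks(envelope, distance=max(int(sr / max_f0), 1))
    half = int(region_ms * 1e-3 * sr / 2)
    regions = [residual[max(p - half, 0):p + half] for p in peaks]
    return np.concatenate(regions) if regions else residual


# Hypothetical usage on one utterance of the corpus:
# y, sr = librosa.load("utterance.wav", sr=8000)
# feat = epoch_region_samples(lp_residual(y, sr), sr)
```

Likewise, the auto-associative neural network stage can be pictured as one autoencoder per emotion, with a test vector assigned to the emotion whose model reconstructs it with the lowest error. The layer sizes, the blocking of residual samples into 40-dimensional vectors, the use of scikit-learn's MLPRegressor, and the reconstruction-error decision rule are assumptions made for illustration.

```python
# Illustrative AANN sketch (assumptions: one autoencoder per emotion, blocked
# 40-sample residual vectors as input, MLPRegressor as the network, and
# lowest reconstruction error as the decision rule).
import numpy as np
from sklearn.neural_network import MLPRegressor


def train_aann(X):
    """Train an auto-associative (identity-mapping) network on feature
    vectors X of shape (n_vectors, dim)."""
    aann = MLPRegressor(hidden_layer_sizes=(38, 4, 38),  # compression layer in the middle
                        activation="tanh", max_iter=500, random_state=0)
    aann.fit(X, X)                                        # target == input
    return aann


def classify(models, x):
    """Assign x to the emotion whose AANN reconstructs it with least error."""
    errors = {emo: float(np.mean((m.predict(x[None, :]) - x) ** 2))
              for emo, m in models.items()}
    return min(errors, key=errors.get)


# Hypothetical usage, with X_anger, X_happy, X_neutral, X_sad as (n, 40)
# arrays of blocked epoch-region residual samples:
# models = {e: train_aann(X) for e, X in
#           [("anger", X_anger), ("happy", X_happy),
#            ("neutral", X_neutral), ("sad", X_sad)]}
# label = classify(models, test_vector)   # test_vector: shape (40,)
```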






Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Koolagudi, S.G., Devliyal, S., Barthwal, A., Sreenivasa Rao, K. (2012). Emotion Recognition from Semi Natural Speech Using Artificial Neural Networks and Excitation Source Features. In: Parashar, M., Kaushik, D., Rana, O.F., Samtaney, R., Yang, Y., Zomaya, A. (eds) Contemporary Computing. IC3 2012. Communications in Computer and Information Science, vol 306. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32129-0_30


  • DOI: https://doi.org/10.1007/978-3-642-32129-0_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32128-3

  • Online ISBN: 978-3-642-32129-0

  • eBook Packages: Computer Science, Computer Science (R0)
