Laughter Research: A Review of the ILHAIRE Project

Chapter in: Toward Robotic Socially Believable Behaving Systems - Volume I

Abstract

Laughter is everywhere, so much so that we often do not even notice it. Laughter has a strong connection with humour: most of us seek out laughter and the people who make us laugh, and laughing is what we do when we gather in groups to relax and have a good time. But laughter also plays an important role in keeping our interactions with each other running smoothly. It provides social bonding signals that allow our conversations to flow seamlessly from topic to topic, help us repair conversations that are breaking down, and bring our conversations to a close on a positive note.


Notes

  1. http://www.ilhaire.eu/.

  2. Laughter elements correspond to individual bursts of energy, whose succession is characteristic of laughter (a minimal illustrative sketch of this idea follows these notes).

  3. http://www.qub.ac.uk/ilhairelaughter.

  4. http://www.cantoche.com/.
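
Note 2 describes laughter acoustically as a succession of energy bursts. The following is a minimal sketch of that idea, not the ILHAIRE analysis pipeline: it computes a short-time energy envelope of a mono signal and treats each contiguous span above a relative threshold as one candidate laughter element. The function names, frame sizes, and the 0.2 relative threshold are illustrative assumptions.

```python
# Illustrative sketch only (not ILHAIRE tooling): locate "laughter
# elements" as successive energy bursts in a mono audio signal.
import numpy as np

def short_time_energy(x, sr, frame_ms=20, hop_ms=10):
    """Frame-wise energy envelope of a mono signal x sampled at sr Hz."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    return np.array([np.sum(x[i * hop:i * hop + frame] ** 2) for i in range(n)])

def laughter_elements(x, sr, rel_threshold=0.2):
    """Return (start_frame, end_frame) spans where energy exceeds a
    fraction of its peak; each contiguous span is one candidate element."""
    e = short_time_energy(x, sr)
    active = e > rel_threshold * e.max()
    spans, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(active)))
    return spans

# Synthetic "ha-ha-ha": three noise bursts separated by silences.
sr = 16000
rng = np.random.default_rng(0)
burst = rng.standard_normal(int(0.15 * sr))
gap = np.zeros(int(0.1 * sr))
x = np.concatenate([burst, gap, burst, gap, burst])
print(laughter_elements(x, sr))  # three spans, one per burst
```

On the synthetic example the detector returns three spans, mirroring the burst-gap-burst structure that the note attributes to laughter.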


Acknowledgments

We would like to acknowledge all colleagues within the ILHAIRE project, from the following partner organisations: University of Mons (Belgium), Télécom ParisTech / Centre National de la Recherche Scientifique (France), University of Augsburg (Germany), Università degli Studi di Genova (Italy), University College London (United Kingdom), Queen's University Belfast (United Kingdom), University of Zurich (Switzerland), Supélec (France), Cantoche (France), and University of Lille (France). Our thanks go to Laurent Ach, Elisabeth André, Hane Aung, Emeline Bantegnie, Tobias Baur, Nadia Berthouze, Antonio Camurri, Gerard Chollet, Roddy Cowie, Will Curran, Yu Ding, Stéphane Dupont, Thierry Dutoit, Matthieu Geist, Harry Griffin, Jing Huang, Jennifer Hofmann, Florian Lingenfelser, Anh Tu Mai, Maurizio Mancini, Gary McKeown, Benoît Morel, Radoslaw Niewiadomski, Sathish Pammi, Catherine Pelachaud, Olivier Pietquin, Bilal Piot, Tracey Platt, Bingqing Qu, Johannes Wagner, Willibald Ruch, Abhisheck Sharma, Lesley Storey, Jérôme Urbain, Giovanna Varni, Gualtiero Volpe, and their colleagues and co-authors. They all contributed to the initial ideas, to the team building, or to the scientific and research developments within the project. The research leading to these results has received funding from the EU Seventh Framework Programme (FP7/2007–2013) under grant no. 270780 (ILHAIRE project).

Author information

Correspondence to Stéphane Dupont.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Dupont, S. et al. (2016). Laughter Research: A Review of the ILHAIRE Project. In: Esposito, A., Jain, L. (eds) Toward Robotic Socially Believable Behaving Systems - Volume I. Intelligent Systems Reference Library, vol 105. Springer, Cham. https://doi.org/10.1007/978-3-319-31056-5_9

  • DOI: https://doi.org/10.1007/978-3-319-31056-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-31055-8

  • Online ISBN: 978-3-319-31056-5

  • eBook Packages: Engineering, Engineering (R0)
