
Intelligent Presentation Skills Trainer Analyses Body Movement

  • Conference paper
  • First Online:
Advances in Computational Intelligence (IWANN 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9095)

Abstract

Public speaking is a non-trivial task, since its effectiveness depends heavily on how nonverbal behaviors are expressed. Practicing the appropriate expressions is difficult because they are mostly produced subconsciously. This paper presents our empirical study of the nonverbal behavior of presenters, which served as the ground truth for developing an intelligent tutoring system. The system captures the bodily characteristics of presenters via a depth camera, interprets this information to assess the quality of the presentation, and then gives feedback to the user. Feedback is delivered immediately through a virtual conference room, in which the reactions of the simulated avatars are controlled by the presenter's performance.
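The abstract outlines a capture–assess–feedback loop: skeletal data from a depth camera is interpreted into a quality estimate, which in turn drives the reactions of avatars in a virtual conference room. Below is a minimal Python sketch of that loop; every name in it (SkeletonFrame, assess_posture, VirtualAudience) is a hypothetical placeholder for the components described in the abstract, not the authors' actual implementation.

```python
# Illustrative sketch only: the names below are hypothetical stand-ins for
# the capture, assessment and feedback components described in the abstract.

from dataclasses import dataclass
from typing import Iterable, List, Tuple


@dataclass
class SkeletonFrame:
    """One frame of (x, y, z) joint positions streamed from a depth camera."""
    joints: List[Tuple[float, float, float]]


def assess_posture(frame: SkeletonFrame) -> float:
    """Map bodily features to a presentation-quality score in [0, 1].

    Placeholder rule: an empty frame (presenter not detected) scores 0.
    A real assessor would score gestures, posture and body orientation.
    """
    return 1.0 if frame.joints else 0.0


class VirtualAudience:
    """Simulated conference room whose avatars react to the presenter."""

    def react(self, score: float) -> None:
        # High scores make the avatars look attentive; low scores, bored.
        print("avatars:", "attentive" if score > 0.5 else "bored")


def training_loop(frames: Iterable[SkeletonFrame], audience: VirtualAudience) -> None:
    """Capture -> assess -> feedback, repeated for every incoming frame."""
    for frame in frames:
        score = assess_posture(frame)   # interpret nonverbal behaviour
        audience.react(score)           # immediate feedback via the virtual room


if __name__ == "__main__":
    demo = [SkeletonFrame(joints=[(0.0, 1.0, 2.0)]), SkeletonFrame(joints=[])]
    training_loop(demo, VirtualAudience())
```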



Author information


Corresponding author

Correspondence to Anh-Tuan Nguyen.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Nguyen, A.-T., Chen, W., Rauterberg, M. (2015). Intelligent Presentation Skills Trainer Analyses Body Movement. In: Rojas, I., Joya, G., Catala, A. (eds.) Advances in Computational Intelligence. IWANN 2015. Lecture Notes in Computer Science, vol. 9095. Springer, Cham. https://doi.org/10.1007/978-3-319-19222-2_27

  • DOI: https://doi.org/10.1007/978-3-319-19222-2_27

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19221-5

  • Online ISBN: 978-3-319-19222-2

  • eBook Packages: Computer Science, Computer Science (R0)
