Abstract
Active listening is a communication technique in which the listener attends to the speaker carefully and attentively, confirming what was heard or asking for further details. To make it effective, a sufficient number of always-available, trustworthy conversational partners is needed. The ultimate goal of this study is the development of a virtual agent that can engage in active listening and maintain a long-term relationship with elderly users. We assume that the task of the active listener (a human volunteer or the agent) is to keep the speaker's (the elderly user's) mood in a good state. To do this, like a human listener, the listener agent has to observe the speaker's attitude, estimate the speaker's mood from that observation, and predict how its own verbal and non-verbal behaviors will change the speaker's mood. The active listener, in turn, is evaluated by the speaker based on the speaker's impression of the listener's attitude. The hypothesis is that if this impression is good, then the speaker's mood is good. However, virtual agents rendered with computer graphics animation are more limited in expressiveness than human listeners, both in quality and in the number of communication channels. This raises the research question of whether a graphical agent with such "reduced expressiveness" can really perform the active listening task at a human listener's level, even when it produces the same behaviors (smiles, gestures, or utterances) at the same timings. This paper presents the first step of this study: a human-human teleconferencing experiment to assess whether it is possible to implement an active listener agent.
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Huang, H.-H., Konishi, N., Shibusawa, S., Kawagoe, K. (2014). Exploring the Difference of the Impression on Human and Agent Listeners in Active Listening Dialog. In: Bickmore, T., Marsella, S., Sidner, C. (eds.) Intelligent Virtual Agents. IVA 2014. Lecture Notes in Computer Science, vol. 8637. Springer, Cham. https://doi.org/10.1007/978-3-319-09767-1_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-09766-4
Online ISBN: 978-3-319-09767-1