
Towards Personalization of Spoken Dialogue System Communication Strategies

Chapter in Conversational Dialogue Systems for the Next Decade

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 704)

Abstract

This study examines the effects of three conversational traits – Register, Explicitness, and Misunderstandings – on user satisfaction and the perception of specific subjective features for Virtual Home Assistant spoken dialogue systems. Eight different system profiles were created, each representing a different combination of these three traits. Using a novel Wizard of Oz data collection tool, we recruited participants who interacted with the 8 system profiles and then rated each system on 7 subjective features. Surprisingly, we found that systems which made errors were preferred overall: the statistical analysis revealed that error-prone systems were rated higher than error-free systems on all 7 subjective features. There were also interesting interaction effects between the three conversational traits, such as implicit confirmations being preferred for systems employing a “conversational” Register, while explicit confirmations were preferred for systems employing a “formal” Register, even though there was no overall main effect for Explicitness. This experimental framework offers a fine-grained approach to the evaluation of user satisfaction which looks towards the personalization of communication strategies for spoken dialogue systems.
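The abstract implies a 2 × 2 × 2 factorial design: two levels each of Register, Explicitness, and Misunderstandings yielding the eight system profiles, each rated on seven subjective features. The sketch below makes that structure concrete; it is purely illustrative and is not the authors' code. The trait names and levels are taken from the chapter, while the column names, the `rating` variable, and the use of Python with pandas/statsmodels for a full-factorial ANOVA are assumptions.

```python
from itertools import product

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Each system profile is one combination of the three binary traits.
TRAITS = {
    "register": ["conversational", "formal"],
    "explicitness": ["implicit", "explicit"],
    "misunderstandings": ["error", "no_error"],
}
profiles = [dict(zip(TRAITS, combo)) for combo in product(*TRAITS.values())]
assert len(profiles) == 8  # 2 x 2 x 2 = 8 system profiles

def rating_anova(ratings: pd.DataFrame) -> pd.DataFrame:
    """Fit a full-factorial OLS model of one subjective rating on the three
    traits and return the ANOVA table with main effects and interactions.

    `ratings` is assumed to have one row per (participant, profile) pair,
    with columns: register, explicitness, misunderstandings, rating.
    """
    model = smf.ols(
        "rating ~ C(register) * C(explicitness) * C(misunderstandings)",
        data=ratings,
    ).fit()
    return anova_lm(model, typ=2)
```

Run on a table of per-interaction ratings, such an analysis would surface effects of the kind reported in the abstract, for example a main effect of Misunderstandings or a Register × Explicitness interaction.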



Acknowledgements

This work was funded in part by Samsung Electronics Co., Ltd., and partly supported by the U.S. Army. Statements and opinions expressed do not necessarily reflect the policy of the United States Government, and no official endorsement should be inferred.

Author information

Corresponding author

Correspondence to Kallirroi Georgila.


Appendix

The following are examples of dialogues for a single task, generated by participant interactions with each of the 8 system profiles. They illustrate how the interactions differ across the 8 system profiles.

The Task. Users were presented with the following task: “Stop the washing machine in the kitchen and then turn it off, then turn the speaker volume to 9 in the living room.”

NoError Systems. Below are dialogue examples for the systems which did not make errors (Table 7). These were the 4 worst-performing systems overall.

Table 7 Dialogue examples for NoError systems

Error Systems. Below are dialogue examples for the systems which did make errors (Tables 8 and 9). These were the 4 best-performing systems overall.

It may not be immediately clear what the errors are for the Squirrel and Giraffe systems, since they only gave implicit confirmations of requests. The error in the Squirrel system is that the washing machine is only stopped, and not turned off, requiring the user to restate the request to turn it off in line 3 of the Squirrel dialogue in Table 8. The error for the Giraffe system is that the speaker volume was set to 8 instead of 9, as evidenced by the user restating their request in line 11 of the Giraffe dialogue in Table 9.

Table 8 Dialogue examples for Error systems: Conversational
Table 9 Dialogue examples for Error systems: Formal


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Gordon, C., Georgila, K., Yanov, V., Traum, D. (2021). Towards Personalization of Spoken Dialogue System Communication Strategies. In: D'Haro, L.F., Callejas, Z., Nakamura, S. (eds) Conversational Dialogue Systems for the Next Decade. Lecture Notes in Electrical Engineering, vol 704. Springer, Singapore. https://doi.org/10.1007/978-981-15-8395-7_11


  • DOI: https://doi.org/10.1007/978-981-15-8395-7_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-8394-0

  • Online ISBN: 978-981-15-8395-7

  • eBook Packages: Engineering, Engineering (R0)
