How to Achieve Explainability and Transparency in Human AI Interaction

  • Conference paper
HCI International 2019 - Posters (HCII 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1033)

Included in the following conference series: HCI International (HCII)

Abstract

It is typically not transparent to end users how AI systems derive information or make decisions. This becomes critical as AI systems enter more areas of daily life, increasingly influence automated decision-making, and are relied on by more people. We present work in progress on explainability to support transparency in human AI interaction. In this paper, we discuss methods and research findings on categorizations of user types, system scope and limits, situational context, and changes over time. Based on these dimensions, their ranges, and their combinations, we aim to identify the facet of transparency that best addresses a given situation. The approach is human-centered: it provides explanations that are adequate in their depth of detail and level of information, and we outline the different dimensions of this complex task.
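To make the dimensions named above concrete, the sketch below models them as a small data structure and selects an explanation depth from one combination of values. It is a minimal illustration only: the user categories, the scope flag, the urgency level, and the selection rules are assumptions invented for this example, not the categorizations or method proposed in the paper.

from dataclasses import dataclass
from enum import Enum


class UserType(Enum):
    # Hypothetical user categories; the paper's actual user types may differ.
    NOVICE = "novice"
    DOMAIN_EXPERT = "domain_expert"
    AI_EXPERT = "ai_expert"


class Urgency(Enum):
    # Stand-in for the situational-context dimension.
    LOW = "low"
    HIGH = "high"


@dataclass
class ExplanationContext:
    # One point in the space spanned by the abstract's four dimensions.
    user: UserType
    within_system_scope: bool   # system scope and limits
    urgency: Urgency            # situational context
    interactions_so_far: int    # proxy for change over time / familiarity


def explanation_depth(ctx: ExplanationContext) -> str:
    # Map a context to a depth of detail. The rules below are
    # illustrative placeholders, not the paper's method.
    if not ctx.within_system_scope:
        # Outside the system's limits: disclose the limit, don't rationalize.
        return "limit-disclosure"
    if ctx.urgency is Urgency.HIGH:
        # Time-critical situations get a short, actionable summary.
        return "brief-summary"
    if ctx.user is UserType.AI_EXPERT:
        return "full-model-detail"
    if ctx.interactions_so_far > 10:
        # Familiar users need less repeated background over time.
        return "incremental-update"
    return "plain-language-overview"


if __name__ == "__main__":
    ctx = ExplanationContext(UserType.NOVICE, True, Urgency.LOW, interactions_so_far=2)
    print(explanation_depth(ctx))  # -> plain-language-overview

Encoding the dimensions explicitly, rather than hard-coding one explanation style, makes it auditable which context produced which depth of explanation; the paper's aim is richer, combining the full ranges of all four dimensions.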


Author information

Corresponding author

Correspondence to Joana Hois.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Hois, J., Theofanou-Fuelbier, D., Junk, A.J. (2019). How to Achieve Explainability and Transparency in Human AI Interaction. In: Stephanidis, C. (eds) HCI International 2019 - Posters. HCII 2019. Communications in Computer and Information Science, vol 1033. Springer, Cham. https://doi.org/10.1007/978-3-030-23528-4_25

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-23528-4_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23527-7

  • Online ISBN: 978-3-030-23528-4

  • eBook Packages: Computer Science, Computer Science (R0)
