
2D Transparency Space—Bring Domain Users and Machine Learning Experts Together

  • Chapter
  • First Online:
Human and Machine Learning

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

Machine Learning (ML) currently faces prolonged challenges with user acceptance of delivered solutions, as well as system misuse, disuse, or even failure. These fundamental challenges can be attributed to the “black-box” nature of ML methods as experienced by domain users when ML-based solutions are offered to them. That is, transparency of ML is essential for domain users to trust ML and use it confidently in their practice. This chapter argues for a change in how we view the relationship between humans and machine learning in order to translate ML results into impact. We present a two-dimensional transparency space that brings domain users and ML experts together to make ML transparent. We identify typical Transparent ML (TML) challenges and discuss key obstacles to TML, with the aim of inspiring active discussion of how to make ML transparent from a systematic viewpoint in this timely field.



Acknowledgements

This work was supported in part by AOARD under grant No. FA2386-14-1-0022 AOARD 134131.

Author information


Corresponding author

Correspondence to Jianlong Zhou.



Copyright information

© 2018 Crown

About this chapter


Cite this chapter

Zhou, J., Chen, F. (2018). 2D Transparency Space—Bring Domain Users and Machine Learning Experts Together. In: Zhou, J., Chen, F. (eds) Human and Machine Learning. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-90403-0_1


  • DOI: https://doi.org/10.1007/978-3-319-90403-0_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-90402-3

  • Online ISBN: 978-3-319-90403-0

  • eBook Packages: Computer Science, Computer Science (R0)
