
Explainable Reasoning in Face of Contradictions: From Humans to Machines

  • Conference paper
Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12688)

Abstract

A well-studied trait of human reasoning and decision-making is the ability to not only make decisions in the presence of contradictions, but also to explain why a decision was made, in particular if a decision deviates from what is expected by an inquirer who requests the explanation. In this paper, we examine this phenomenon, which has been extensively explored by behavioral economics research, from the perspective of symbolic artificial intelligence. In particular, we introduce four levels of intelligent reasoning in face of contradictions, which we motivate from a microeconomics and behavioral economics perspective. We relate these principles to symbolic reasoning approaches, using abstract argumentation as an exemplary method. This allows us to ground the four levels in a body of related previous and ongoing research, which we use as a point of departure for outlining future research directions.

Notes

  1. Let us highlight that we do not introduce the so-called AGM postulates [2] here, because the success postulate stipulates (colloquially speaking) that “new” logical formulas are always added to the belief base and never rejected; however, we assume that, intuitively, an intelligent agent should be able to reject new beliefs under some circumstances.

  2. Less formal models of human decision-making and reasoning have, of course, been the subject of in-depth study for much longer. Indeed, the management of contradictions that is at the center of this paper is also the subject of the Shev Shema’tata, a book on the treatment of doubt in Rabbinic law, written at the turn of the 18th to the 19th century [18].

  3. Indeed, empirical studies (conducted decades after the publication of Simon’s paper) show that humans sometimes do exactly this [6].

  4. Note that this statement precedes a defense of the approach it describes.

  5. See: http://s.cs.umu.se/hlzdqf.

  6. More semantics exist, some of which address well-known issues with the semantics whose definitions we provide in this paper. However, we consider an in-depth overview of argumentation semantics out of scope.

  7. Note that this would be a violation of the language independence principle.

  8. In these works, we name the principle weak reference independence.

  9. Let us note that stage semantics does not generally establish consistent preferences, given any argumentation framework and any of its normal expansions; see [22].

  10. Given an argumentation framework and a semantics’ extension of this framework, the undecided arguments are all arguments that are neither in the extension nor attacked by any of the arguments in the extension.

  11. This is a constructed example that does not fully reflect real-world legal reasoning.

  12. This notion is reflected by loop-busting approaches that have been proposed in the context of formal argumentation and that are based on Talmudic logic [1].

  13. For the sake of conciseness, we do not introduce CF2 semantics in this paper; the semantics is introduced by Baroni et al. in [5].
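
The definition of undecided arguments in note 10 can be sketched directly in code. This is a minimal, hypothetical illustration (the function and argument names are ours, not from the paper), representing an abstract argumentation framework in Dung's sense [13] as a set of arguments together with a set of (attacker, target) pairs:

```python
def undecided(arguments, attacks, extension):
    """Return the arguments that are neither in the given extension
    nor attacked by any argument in the extension (see note 10)."""
    attacked_by_extension = {b for (a, b) in attacks if a in extension}
    return {x for x in arguments
            if x not in extension and x not in attacked_by_extension}

# Example framework: a and b attack each other, and a also attacks c.
arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("a", "c")}

# Relative to the extension {b}, the argument c is undecided: it is not
# in the extension, and its only attacker (a) is not in the extension
# either, so the extension neither accepts nor defeats it.
print(undecided(arguments, attacks, {"b"}))  # → {'c'}
```

Note that relative to the empty extension of the same framework, every argument is undecided, since nothing is accepted and nothing is attacked by an accepted argument.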

References

  1. Abraham, M., Gabbay, D.M., Schild, U.J.: The handling of loops in Talmudic logic, with application to odd and even loops in argumentation. In: HOWARD-60: A Festschrift on the Occasion of Howard Barringer’s 60th Birthday (2014)

  2. Alchourrón, C.E., Gärdenfors, P., Makinson, D.: On the logic of theory change: partial meet contraction and revision functions. J. Symbolic Logic 50(2), 510–530 (1985)

  3. Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K.: Explainable agents and robots: results from a systematic literature review. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. AAMAS 2019, pp. 1078–1088. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC (2019)

  4. Baroni, P., Giacomin, M.: On principle-based evaluation of extension-based argumentation semantics. Artif. Intell. 171(10), 675–700 (2007). Argumentation in Artificial Intelligence. https://doi.org/10.1016/j.artint.2007.04.004, http://www.sciencedirect.com/science/article/pii/S0004370207000744

  5. Baroni, P., Giacomin, M., Guida, G.: SCC-recursiveness: a general schema for argumentation semantics. Artif. Intell. 168(1), 162–210 (2005). https://doi.org/10.1016/j.artint.2005.05.006

  6. Bateman, I., Munro, A., Rhodes, B., Starmer, C., Sugden, R.: A test of the theory of reference-dependent preferences. Q. J. Econ. 112(2), 479–505 (1997)

  7. Baumann, R., Brewka, G.: Expanding argumentation frameworks: enforcing and monotonicity results. COMMA 10, 75–86 (2010)

  8. Cabrio, E., Villata, S.: Five years of argument mining: a data-driven analysis. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence. IJCAI 2018, pp. 5427–5433. AAAI Press (2018)

  9. Calegari, R., Riveret, R., Sartor, G.: The burden of persuasion in structured argumentation. In: Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law. ICAIL 2021, Association for Computing Machinery, New York, NY, USA (2021)

  10. Cramer, M., Guillaume, M.: Empirical cognitive study on abstract argumentation semantics. Front. Artif. Intell. Appl. 305, 413–424 (2018). https://ebooks.iospress.nl/volume/computational-models-of-argument-proceedings-of-comma-2018

  11. Cramer, M., Guillaume, M.: Empirical study on human evaluation of complex argumentation frameworks. In: Calimeri, F., Leone, N., Manna, M. (eds.) JELIA 2019. LNCS (LNAI), vol. 11468, pp. 102–115. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-19570-0_7

  12. Cramer, M., van der Torre, L.: SCF2 - an argumentation semantics for rational human judgments on argument acceptability. In: Proceedings of the 8th Workshop on Dynamics of Knowledge and Belief (DKB-2019) and the 7th Workshop KI & Kognition (KIK-2019), co-located with the 42nd German Conference on Artificial Intelligence (KI 2019), Kassel, Germany, pp. 24–35 (2019)

  13. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77(2), 321–357 (1995)

  14. Gabbay, D.M.: Theoretical Foundations for Non-Monotonic Reasoning in Expert Systems. In: Apt, K.R. (ed.) Logics and Models of Concurrent Systems. NATO ASI Series (Series F: Computer and Systems Sciences), vol. 13, pp. 439–457. Springer, Heidelberg (1985). https://doi.org/10.1007/978-3-642-82453-1_15

  15. Garcez, A.S.D., Lamb, L.C., Gabbay, D.M.: Neural-symbolic learning systems. In: Lamb, L.C. (ed.) Neural-Symbolic Cognitive Reasoning. COGTECH, Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-73246-4_4

  16. Geffner, H.: Model-free, model-based, and general intelligence. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence. IJCAI 2018, pp. 10–17. AAAI Press (2018)

  17. Haidt, J.: The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol. Rev. 108(4), 814 (2001)

  18. Jacobs, L.: Rabbi Aryeh Laib Heller’s theological introduction to his “Shev Shema’tata”. Modern Judaism 1(2), 184–216 (1981). http://www.jstor.org/stable/1396060

  19. Kahneman, D.: Maps of bounded rationality: psychology for behavioral economics. Am. Econ. Rev. 93(5), 1449–1475 (2003)

  20. Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263–291 (1979)

  21. Kampik, T., Gabbay, D.: Towards DIARG: an argumentation-based dialogue reasoning engine. In: SAFA@COMMA 2020, pp. 14–21 (2020)

  22. Kampik, T., Nieves, J.C.: Abstract argumentation and the rational man. J. Logic Comput. 31(2), 654–699 (2021). https://doi.org/10.1093/logcom/exab003

  23. Landsburg, S.: The Armchair Economist (revised and updated May 2012): Economics and Everyday Life. Free Press (2007)

  24. Lehmann, D., Magidor, M.: What does a conditional knowledge base entail? Artif. Intell. 55(1), 1–60 (1992). http://www.sciencedirect.com/science/article/pii/000437029290041U

  25. Osborne, M.J., Rubinstein, A.: Models in Microeconomic Theory. Open Book Publishers, Cambridge (2020). https://doi.org/10.11647/OBP.0204

  26. Prakken, H., Sartor, G.: A logical analysis of burdens of proof. In: Legal Evidence and Proof: Statistics, Stories, Logic, pp. 223–253 (2009)

  27. Rubinstein, A.: Modeling Bounded Rationality. MIT Press, Cambridge (1998)

  28. Shao, C., Ciampaglia, G.L., Varol, O., Yang, K.C., Flammini, A., Menczer, F.: The spread of low-credibility content by social bots. Nat. Commun. 9(1), 1–9 (2018)

  29. Simon, H.A.: A behavioral model of rational choice. Q. J. Econ. 69(1), 99–118 (1955). https://doi.org/10.2307/1884852

  30. van der Torre, L., Vesic, S.: The principle-based approach to abstract argumentation semantics. IfCoLog J. Logics Appl. 4(8), 34 (2017)

  31. Turing, A.M.: Computing machinery and intelligence. In: Epstein, R., Roberts, G., Beber, G. (eds.) Parsing the Turing Test, pp. 23–65. Springer, Dordrecht (2009). https://doi.org/10.1007/978-1-4020-6710-5_3

  32. Verheij, B.: Two approaches to dialectical argumentation: admissible sets and argumentation stages. Proc. NAIC 96, 357–368 (1996)

  33. Zhong, Q., Fan, X., Luo, X., Toni, F.: An explainable multi-attribute decision model based on argumentation. Expert Syst. Appl. 117, 42–61 (2019). https://doi.org/10.1016/j.eswa.2018.09.038, http://www.sciencedirect.com/science/article/pii/S0957417418306158

Acknowledgments

The authors thank Amro Najjar, Michele Persiani, and the anonymous reviewers for their useful feedback. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Author information

Correspondence to Timotheus Kampik.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Kampik, T., Gabbay, D. (2021). Explainable Reasoning in Face of Contradictions: From Humans to Machines. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2021. Lecture Notes in Computer Science, vol 12688. Springer, Cham. https://doi.org/10.1007/978-3-030-82017-6_17

  • DOI: https://doi.org/10.1007/978-3-030-82017-6_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82016-9

  • Online ISBN: 978-3-030-82017-6
