Abstract
We present an approach to the computational extraction of reasons for explaining moral judgments in the context of a hybrid ethical reasoning agent (HERA). The HERA agent employs logical representations of ethical principles to judge the moral permissibility or impermissibility of actions, and uses the same logical formulae to derive reasons for these judgments. We motivate the distinction between sufficient reasons, necessary reasons, and necessary parts of sufficient reasons, which yield different types of explanations, and we provide algorithms to extract these reasons.
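The paper's own algorithms are not reproduced on this page, but the distinction the abstract draws can be illustrated with a brute-force sketch over propositional facts. Everything below is a hypothetical illustration, not the HERA implementation: the principle `impermissible`, the fact names, and the helper functions are all invented for this example. A sufficient reason is modeled as a minimal subset of the situation's facts that forces the judgment under every completion of the remaining facts; a necessary reason as a fact whose counterfactual flip would change the judgment.

```python
from itertools import combinations, product

def forces(principle, atoms, partial):
    """True if fixing `partial` makes the principle hold under
    every assignment to the remaining atoms."""
    free = [a for a in atoms if a not in partial]
    for values in product([False, True], repeat=len(free)):
        world = dict(partial, **dict(zip(free, values)))
        if not principle(world):
            return False
    return True

def sufficient_reasons(principle, situation):
    """Minimal subsets of the situation's facts that entail the judgment.
    Each fact inside such a minimal subset is a 'necessary part'
    of that sufficient reason (an INUS-style condition)."""
    atoms = list(situation)
    reasons = []
    for r in range(1, len(atoms) + 1):
        for subset in combinations(atoms, r):
            partial = {a: situation[a] for a in subset}
            if forces(principle, atoms, partial):
                # skip supersets of an already-found (smaller) reason
                if not any(set(s) <= set(subset) for s in reasons):
                    reasons.append(subset)
    return reasons

def necessary_reasons(principle, situation):
    """Facts whose counterfactual flip (holding the rest fixed)
    would change the judgment."""
    return [a for a in situation
            if principle(dict(situation, **{a: not situation[a]}))
            != principle(situation)]

# Hypothetical principle: an action is impermissible if it causes harm
# without consent, or if it involves deception.
impermissible = lambda w: (w["harm"] and not w["consent"]) or w["deceive"]
situation = {"harm": True, "consent": False, "deceive": True}
```

On this situation, `sufficient_reasons` finds two minimal sufficient reasons, `("deceive",)` and `("harm", "consent")`, while `necessary_reasons` returns none: the judgment is overdetermined, so no single fact is counterfactually necessary. This is exactly the kind of case where the three reason types diverge and produce different explanations.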
Acknowledgments
We would like to thank the three anonymous reviewers for their constructive comments.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Lindner, F., Möllney, K. (2019). Extracting Reasons for Moral Judgments Under Various Ethical Principles. In: Benzmüller, C., Stuckenschmidt, H. (eds.) KI 2019: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11793. Springer, Cham. https://doi.org/10.1007/978-3-030-30179-8_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30178-1
Online ISBN: 978-3-030-30179-8
eBook Packages: Computer Science (R0)