Abstract
In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reactions to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.
Notes
The following offers definitions of the most important terms used in this article. ‘Evil’ denotes an action that is not simply morally wrong but leaves no room for understanding or redemption; evil is qualitatively, rather than merely quantitatively, distinct from mere wrongdoing. An ‘evil machine’ is a machine whose action causes harm to humans and leaves no room for account or expiation. ‘Robot’ stands for both physical robots and virtual agents roaming within computer networks; an ‘autonomous machine’ is a decision-making machine; ‘artificial intelligence’ is the ability of autonomous machines to make decisions; ‘intelligent machine’ and ‘autonomous intelligent machine’ are synonymous with ‘autonomous machine.’ ‘Machine’ is an umbrella term covering robots and autonomous and intelligent machines. ‘Machine learning algorithms’ can be categorized as supervised or unsupervised: supervised algorithms learn from labeled examples and apply what has been learned to new data, whereas unsupervised algorithms draw inferences about the structure of unlabeled datasets. An important distinction in this article is drawn between humans as designers and engineers, i.e., those who build the machine, and humans as users or clients, i.e., those who interact socially with the machine. The former are named ‘designers’ and ‘engineers,’ the latter ‘users,’ ‘investors,’ ‘clients,’ or, when the text moves from the specific case study to more general considerations, ‘humans’ and ‘humanoids.’ Attributing human characteristics to artificial objects is a human trait known as anthropomorphizing. Biblical quotations are from the New Revised Standard Version of the Oxford Annotated Bible with Apocrypha (Coogan 2010).
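As a purely illustrative aside, the supervised/unsupervised distinction defined above can be sketched in a few lines of Python. This is a minimal sketch, not part of the article; the function names and toy data are hypothetical, chosen only to make the contrast concrete: a supervised procedure classifies a new point from past labeled examples, while an unsupervised one infers groups from unlabeled data alone.

```python
# Supervised: apply what has been learned from labeled past data to a new point.
def nearest_label(labeled, x):
    """Return the label of the labeled point closest to x (1-nearest neighbor)."""
    point, label = min(labeled, key=lambda pl: abs(pl[0] - x))
    return label

# Unsupervised: infer two groups from an unlabeled dataset (tiny 1-D 2-means).
def two_means(values, iters=10):
    """Split values into two clusters by iteratively refining two centers."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]  # nearer to lo
        b = [v for v in values if abs(v - lo) > abs(v - hi)]   # nearer to hi
        lo, hi = sum(a) / len(a), sum(b) / len(b)              # recompute centers
    return sorted(a), sorted(b)

labeled = [(1.0, "safe"), (1.2, "safe"), (9.0, "harmful")]
print(nearest_label(labeled, 1.1))        # classified from past labels -> "safe"

data = [1.0, 1.2, 0.9, 9.0, 9.3]
print(two_means(data))                    # grouping inferred without any labels
```

The supervised routine cannot run without the human-provided labels; the unsupervised one never sees labels at all, which is precisely the categorization used in this article.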
References
Adams G, Balfour DL (2009) Unmasking administrative evil. M.E. Sharpe, New York
Allen RE (2006) Plato: the republic. Yale University Press, New Haven
Arkin R (2009) Governing lethal behavior in autonomous robots. Chapman and Hall/CRC, Boca Raton
Asimov I (1942) Runaround. Astounding Sci Fiction 29(2):94–103
Bataille G (2001) Literature and evil. Marion Boyars Publishers, London
Bernstein RJ (2002) Radical evil: a philosophical investigation. Polity Press, Cambridge
Bostrom N (2002) Existential risks: analyzing human extinction scenarios. J Evol Technol 9:1–30
Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Bostrom N, Yudkowsky E (2011) The ethics of artificial intelligence. In: Ramsey W, Frankish K (eds) Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge, pp 316–334
Calder T (2002) Towards a theory of evil: a critique of Laurence Thomas’s theory of evil acts. In: Haybron DM (ed) Earth’s abominations: philosophical studies of evil. Rodopi, New York, pp 51–61
Coeckelbergh M (2009) Personal robots, appearance, and human good: a methodological reflection on roboethics. Int J Soc Robot 1(3):217–221
Coeckelbergh M (2010) You, Robot: on the linguistic construction of artificial others. AI & Soc 26(1):61–69
Coeckelbergh M (2012) Can we trust robots? Ethics Inf Technol 14(1):53–60
Coogan MD et al (2010) The New Oxford annotated Bible with Apocrypha: new revised standard version. Oxford University Press, New York
Darley JM (1992) Social organization for the production of evil. Psychol Inq 3:199–218
Darley JM (1996) How organizations socialize individuals into evildoing. In: Messick DM, Tenbrunsel AE (eds) Codes of conduct: behavioral research into business ethics. Russell Sage Foundation, New York, pp 179–204
Dennett DC (1987) The intentional stance. MIT Press, Cambridge, MA
Dennett DC (1998) When HAL kills, who’s to blame? Computer ethics. In: Stork D (ed) HAL’s legacy: 2001’s computer as dream and reality. MIT Press, Cambridge, MA
Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114:864
Floridi L, Sanders J (2004) On the morality of artificial agents. Minds Mach 14(3):349–379
Garrard E (1998) The nature of evil. Philos Explor Int J Philos Mind Action 1(1):43–60
Garrard E (2002) Evil as an explanatory concept. The Monist 85(2):320–336
Geddes JL (2003) Banal evil and useless knowledge: Hannah Arendt and Charlotte Delbo on evil after the holocaust. Hypatia 18:104–115
Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference, and prediction. 2nd edn. Springer, New York
Irrgang B (2006) Ethical acts in robotics. Ubiquity 7(34). http://www.acm.org/ubiquity. Accessed 12 Oct 2017
Johnson V, Brennan LL, Johnson VE (2004) Social, ethical and policy implications of information technology. Information Science Publishing, Hershey
Kamm F (2007) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, Oxford
Kroll JA, Huey J, Barocas S, Felten EW, Reidenberg JR, Robinson DG, Yu H (2016) Accountable algorithms. Univ PA Law Rev 165:633
Lee S, Kiesler S, Lau IY, Chiu C-Y (2005) Human mental models of humanoid robots. In: Proceedings of the 2005 IEEE international conference on robotics and automation (ICRA’05). Barcelona, April 18–22, pp 2767–2772
Lin P, Abney K, Bekey GA (eds) (2014) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, MA
Loughnan S, Haslam N (2007) Animals and androids: implicit associations between social categories and nonhumans. Psychol Sci 18:116–121
Mittelstadt B, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: mapping the debate. Big Data Soc 3(2):1–21
Nadeau JE (2006) Only androids can be ethical. In: Ford K, Glymour C (eds) Thinking about android epistemology. MIT Press, Cambridge, MA, pp 241–248
Neiman S (2002) Evil in modern thought: an alternative history of philosophy. Princeton University Press, Princeton
Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge, MA
Powers TM (2009) Machines and moral reasoning. Philos Now 72:15–16
Powers TM (2016) Prospects for a Kantian machine. In: Wallach W, Asaro P (eds) Machine ethics and robot ethics. Ashgate Publishing, Farnham
Powers A, Kiesler S, Fussell S, Torrey C (2007) Comparing a computer agent with a humanoid robot. In: Proceedings of HRI07, pp 145–152
Schnall S, Cannon PR (2012) The clean conscience at work: emotions, intuitions and morality. J Manag Spiritual Relig 9(4):295–315
Sofge E (2014) Robots are evil: the sci-fi myth of killer machines. Pop Sci. http://www.popsci.com/blog-network/zero-moment/robots-are-evil-sci-fi-myth-killer-machines. Accessed 13 June 2017
Staub E (1989, reprinted 1992) The roots of evil: the origins of genocide and other group violence. Cambridge University Press, Cambridge
Steiner H (2002) Calibrating evil. The Monist 85(2):183–193
Styhre A, Sundgren M (2003) Management is evil: management control, technoscience and saudade in pharmaceutical research. Leadersh Organ Dev J 24(8):436–446
Sullins JP (2005) Ethics and artificial life: from modeling to moral agents. Ethics Inf Technol 7:139–148
Sullins JP (2006) When is a robot a moral agent? Int Rev Inf Ethics 6(12):24–30
Taddeo M (2010) Trust in technology: a distinctive and a problematic relation. Knowl Technol Policy 23(3–4):283–286
Tang TL-P (2010) Money, the meaning of money, management, spirituality, and religion. J Manag Spiritual Relig 7(2):173–189
Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460
Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, New York
Waytz A, Cacioppo J, Epley N (2010) Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect Psychol Sci 5:219–232
Zimbardo P (2007) The Lucifer effect: understanding how good people turn evil. Random House, New York
Cite this article
Beltramini, E. Evil and roboethics in management studies. AI & Soc 34, 921–929 (2019). https://doi.org/10.1007/s00146-017-0772-x