How to Treat Machines that Might Have Minds

  • Research Article
  • Published in Philosophy & Technology

Abstract

This paper offers practical advice about how to interact with machines that we have reason to believe could have minds. I argue that we should approach these interactions by assigning credences to judgements about whether the machines in question can think. We should treat the premises of philosophical arguments about whether these machines can think as offering evidence that may increase or reduce these credences. I describe two cases in which you should refrain from acting as your favored philosophical view about thinking machines suggests. Even if you believe that machines are mindless, you should acknowledge that treating them as if they are mindless risks wronging them. Suppose your considered philosophical view that a machine has a mind leads you to consider dating it. You may have reason to regret that decision should these dates lead to a life-long relationship with a mindless machine. In the paper’s final section, I suggest that building a machine capable of performing all intelligent human behavior should produce a general increase in confidence that machines can think. Any reasonable judge should count this feat as evidence in favor of machines having minds. This rational nudge could lead to broad acceptance of the idea that machines can think.
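
To make the abstract’s credence framework concrete, here is a minimal sketch of Bayesian credence updating in Python. It is an illustration under stated assumptions, not anything from the paper: the starting credence, the likelihood ratios, and the treatment of each argument’s premises as a single likelihood ratio are all hypothetical.

    # A minimal sketch, assuming we model "premises as evidence" with
    # likelihood ratios. All numbers are hypothetical, not from the paper.

    def update(credence: float, likelihood_ratio: float) -> float:
        """Bayesian odds update on the proposition 'this machine can think'.

        likelihood_ratio = P(evidence | it thinks) / P(evidence | it does not).
        Ratios above 1 raise the credence; ratios below 1 lower it.
        """
        odds = credence / (1.0 - credence)
        posterior_odds = odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    credence = 0.3  # hypothetical starting credence that the machine thinks
    evidence = [
        ("a Chinese-room-style premise", 0.5),                # counts against
        ("it performs all intelligent human behavior", 4.0),  # counts for
    ]
    for label, ratio in evidence:
        credence = update(credence, ratio)
        print(f"after {label}: credence = {credence:.2f}")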

Notes

  1. See the literature on philosophical disagreement. For example, Christensen (2007) and the essays in Christensen and Lackey (2013).

  2. For a user-friendly philosophical guide to this approach to belief, see Pettigrew (2013). See also Pettigrew (2016).

  3. See Agar (2014) for a description of this approach to uncertainties about the directives of utilitarianism.

  4. I have presented a credence of 0 as reflecting the judgement that there is no chance of wronging a computer in a way that requires it to have a mind. We should also accept very low positive credences, say .001, as reflecting the judgement that there is a negligible chance of harming a being in ways specific to beings with minds. Perhaps it is appropriate to assign a credence of .001 to the proposition that cutting down a tree causes it to suffer. A credence this low suggests that those who cut down a tree can generally ignore the possibility that it suffers. It does not have the practical implications of a .3 credence assigned to the proposition that a computer that produces all intelligent human behavior can have thoughts and feelings (see the sketch following these notes).

  5. See Pearcey (2015) for an argument that the theory of evolution contains a contradiction.

  6. See Basl (2014) for discussion of what it would take for a machine to have interests that should be taken into account in our moral deliberations.

  7. An anonymous referee makes the point that there is a moral dimension to this rejection. The decision to not date Sam suggests an assessment that Sam may find offensive. I argue that the principal reasons to not date Sam are prudential. These reasons should not be viewed as morally offensive in the same way as a straightforwardly false racist reason to discontinue a relationship.

  8. See Danaher (2018) and Hauskeller (2018) for interesting discussions of the predicaments of synths in Humans.

  9. See the discussion in Sparrow (2004).

  10. See Erica Neely (2014) for the suggestion that it is appropriate to err on the side of caution in such cases. When we apply her reasoning to Alex’s story, we acknowledge that it is better to cause Alex’s creator the inconvenience of having to do without Alex’s recyclable materials than it is to cause suffering to a being with a mind. See David Gunkel (2018, section 3.11) for discussion of Neely’s argument.
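
The sketch promised in Note 4 follows. It renders the note’s contrast between a negligible credence (.001) and a practically significant one (.3) as a toy expected-value calculation; only the two credences come from the note, and the harm and inconvenience magnitudes are hypothetical placeholders, not figures from the paper.

    # A toy expected-value sketch for Note 4. Only the two credences come
    # from the note; the cost magnitudes are hypothetical placeholders.

    HARM_IF_MINDED = 100.0  # hypothetical badness of wronging a being with a mind
    COST_OF_CAUTION = 1.0   # hypothetical inconvenience of acting as if it has one

    def caution_warranted(credence: float) -> bool:
        # Err on the side of caution when the expected moral cost of treating
        # the being as mindless exceeds the cost of treating it as minded.
        return credence * HARM_IF_MINDED > COST_OF_CAUTION

    print(caution_warranted(0.001))  # False: the tree-feller may ignore the risk
    print(caution_warranted(0.3))    # True: a .3 credence has practical bite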

References

  • Agar, N. (2014). How to insure against utilitarian overconfidence. Monash Bioethics Review, 32, 162–171.

  • Agar, N. (2019). How to be human in the digital economy. Cambridge: MIT Press.

  • Basl, J. (2014). Machines as moral patients we shouldn’t care about (yet): the interests and welfare of current machines. Philosophy & Technology, 27(1), 79–96.

  • Bryson, J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions. Amsterdam: John Benjamins.

  • Christensen, D. (2007). Epistemology of disagreement: the good news. Philosophical Review, 116, 187–218.

  • Christensen, D., & Lackey, J. (Eds.). (2013). The epistemology of disagreement: new essays. New York: Oxford University Press.

  • Danaher, J. (2018). The symbolic-consequences argument in the sex robot debate. In J. Danaher (Ed.), Robot sex: Social and ethical implications. Cambridge: MIT Press.

  • Dennett, D. (1991). Consciousness explained. Boston: Little, Brown and Co.

  • Gunkel, D. (2018). Robot rights. Cambridge: MIT Press.

  • Hájek, A. (2018). Pascal’s wager. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2018 edition). URL = <https://plato.stanford.edu/archives/sum2018/entries/pascal-wager/>. Accessed 2 Feb 2019.

  • Hauskeller, M. (2018). Automatic sweethearts. In J. Danaher (Ed.), Robot sex: Social and ethical implications. Cambridge: MIT Press.

  • Kurzweil, R. (2005). The singularity is near: when humans transcend biology. London: Penguin.

  • Lycan, W. (1987). Consciousness. Cambridge: MIT Press.

  • Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111.

  • Pascal, B. (1910). Pensées (W. F. Trotter, Trans.). London: Dent.

  • Pearcey, N. (2015). Finding truth: 5 principles for unmasking atheism, secularism, and other god substitutes. Colorado Springs: David C Cook.

  • Pettigrew, R. (2013). Epistemic utility and norms for credences. Philosophy Compass, 8(10), 897–908.

  • Pettigrew, R. (2016). Epistemic utility arguments for probabilism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2016 edition). URL = <https://plato.stanford.edu/archives/spr2016/entries/epistemic-utility/>. Accessed 2 Feb 2019.

  • Rosenthal, D. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329–359.

  • Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 89–119.

  • Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–424.

  • Sparrow, R. (2004). The Turing triage test. Ethics and Information Technology, 6, 203–213.

Acknowledgments

This paper has been improved by the comments of Pablo Barranquero, Stuart Brock, Lucinda Campbell, Juliet Floyd, Michael Hauskeller, Bengt Kayser, Simon Keller, Edwin Mares, Jonathan Pengelly, Russell Powell, Johann Roduit, Rob Sparrow, Nicole Vincent, and Mark Walker. I have also benefited from audiences at the University of Zurich, University of Malaga, Boston University, New Mexico State University, University of Texas at El Paso, Victoria University of Wellington, and Aarhus University, and two anonymous referees for this journal.

Author information

Correspondence to Nicholas Agar.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Agar, N. How to Treat Machines that Might Have Minds. Philos. Technol. 33, 269–282 (2020). https://doi.org/10.1007/s13347-019-00357-8
