Where Intelligence Lies: Externalist and Sociolinguistic Perspectives on the Turing Test and AI

  • Conference paper in Philosophy and Theory of Artificial Intelligence 2017 (PT-AI 2017)

Part of the book series: Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE, volume 44)

Abstract

Turing’s Imitation Game (1950) is usually understood to be a test for machines’ intelligence; I offer an alternative interpretation. Turing, I argue, held an externalist-like view of intelligence, according to which an entity’s being intelligent depends not just on its functions and internal structure, but also on the way it is perceived by society. He conditioned the determination that a machine is intelligent upon two criteria: one technological and one sociolinguistic. The Technological Criterion requires that the machine’s structure enable it to imitate the human brain so well that it displays intelligent-like behavior; the Imitation Game tests whether this Technological Criterion has been fulfilled. The Sociolinguistic Criterion requires that the machine be perceived by society as a potentially intelligent entity. Turing recognized that, in his day, this Sociolinguistic Criterion could not be fulfilled because of humans’ chauvinistic prejudice towards machines; but he believed that the future development of machines displaying intelligent-like behavior would cause this chauvinistic attitude to change. I conclude by discussing some implications Turing’s view may have for the fields of AI development and ethics.

Notes

  1.

    My reading is supported by several non-orthodox commentaries on Turing scattered throughout the literature, such as Whitby (1996), Boden (2006, pp. 1346–1356), Sloman (2013), and especially Proudfoot (2005; 2013).

    Some of the arguments suggested in this paper have appeared in Danziger (2016).

  2.

    As Piccinini (2000), Proudfoot (2013), and others have pointed out, Turing uses the terms “thought” and “intelligence” interchangeably. Although I will not differentiate between the terms, for reasons of uniformity I shall usually use the term “intelligence”.

  3.

    Almost all commentaries on – and attacks against – Turing’s paper can be classified into one of the two streams of interpretation described in Sects. 2.1 and 2.2; see Proudfoot (2013) for a detailed analysis and critique of these interpretations. Other classification schemes can be found in Saygin et al. (2000) and in Oppy and Dowe (2011).

  4.

    Searle’s interpretation of Turing is also behavioristic: “The Turing Test is typical of the tradition in being unashamedly behavioristic and operationalistic” (Searle 1980, p. 423; cf. next footnote). References to other behavioristic interpretations can be found in Proudfoot (2013), Copeland (2004, pp. 434–435), and Moor (2001, pp. 81–82).

  5.

    This is the crux of perhaps the two best-known arguments against the IG, namely, Searle’s Chinese Room (Searle 1980) and Block’s Blockhead / Aunt Bubbles Machine (Block 1981; 1995): Intelligence, they maintain, cannot be captured in behavioral terms alone. (Note that both arguments belong to the behavioristic school of interpretation, in that they assume that the IG is intended to be a behavioral test for intelligence.)

  6.

    The main proponent of the school of inductive interpretations is Moor (1976; 2001). Other inductive interpretations can be found in Watt (1996) and Schweizer (1998).

  7.

    The ideas in this section draw partly on Proudfoot (2005; 2013). As I shall show later, my interpretation of Turing differs from Proudfoot’s on small but crucial points; to avoid inaccuracies I shall refrain for now from presenting her take on the subjects discussed, despite my great debt to her work.

  8.

    Turing’s approach also bears a resemblance to Dennett’s “intentional stance” (Dennett 1987a).

  9.

    In Sects. 3.2 and 4.3 I shall present further textual evidence that this was Turing’s approach, and shall briefly discuss what might have motivated him to adopt such a stance.

  10.

    Turing is trying to prove that the existence of an intelligent machine is possible, and is not merely asking whether it is possible. Therefore he will try to show that machines fulfill a sufficient condition for being (perceived as) intelligent, and will put less emphasis on the necessary conditions.

  11.

    There is no need to point out here which properties of the “learning machine” are necessary conditions for perceiving a system as intelligent; all that is being claimed is that a “learning machine” indeed has these properties, whatever they may be.

  12.

    It will later become clear why this claim is labeled “minor”.

  13.

    Hodges (2014, p. 530) explains in a similar way the difference between Turing’s 1948 and 1950 papers.

  14.

    “Intelligent-like behavior” may be roughly defined as “behavior that under regular circumstances cannot be differentiated from that of a human”.

  15.

    The Technological Criterion (1950) is closely connected to the Minor-Technological Claim (1947, 1948) but is more “demanding” (as explained above, Sect. 4.1); that is why the 1947–1948 claim is labeled “minor”.

  16.

    Bringsjord et al. (2001) mention a similar idea of “restricted epistemic relation”: They suggest the “Lovelace Test” for intelligence in which “not knowing how a system works” is a necessary condition for attributing intelligence to it. The fundamental difference between the Lovelace Test and Turing’s IG will be explained later (fn. 23).

  17.

    Other clear remarks by Wittgenstein in this spirit appear in Wittgenstein (2009, §281) and Wittgenstein (1958, p. 47). The similarity between Turing’s and Wittgenstein’s ideas here has also been pointed out by Boden (2006, p. 1351) and Chomsky (2008, p. 104).

  18.

    The Sociolinguistic Criterion (1950) is closely connected to the Sociological Claim (1947, 1948) mentioned in Sect. 3.2. The addition of the “linguistic” component will soon be explained.

  19.

    At this point one might raise the following objection: “Your reading boldly ignores the next sentence in Turing’s paper, in which he supposedly predicts that in fifty years there will be intelligent machines (1950, p. 442): ‘Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’ This implies that Turing identified success in constructing machines that do well in the Game with success in creating intelligent machines; the timeframe in both sentences is the same (the year 2000), and so they seem to be referring to the same futuristic occurrence!” My reply, in short, is that this objection is based on an incorrect – albeit very common – reading of the passage in Turing’s paper. Turing, I claim, makes two different predictions here, and these predictions are connected causally but not logically. “Doing well in the IG” is not the same as “being intelligent”. The IG, I insist, is not a test for intelligence, but a test only for the Technological Criterion of intelligence: it tests whether a system’s behavior is intelligent-like. (I shall return to this issue in Sect. 5.1.)

  20.

    Aaron Sloman, too, sees the IG as Turing’s way of defining a technological challenge, and not as a test for intelligence (Sloman 2013). In an earlier version of his paper Sloman expresses his dissatisfaction with the orthodox interpretations of the IG; I found myself wholly identifying with his words (my italics): “It is widely believed that Turing proposed a test for intelligence. This is false. He was far too intelligent to do any such thing, as should be clear to anyone who has read his paper…”

    (Source: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-test.html. Accessed Oct. 11, 2017.)

  21.

    To develop this point further: A real test for a system’s intelligence would check if the system is perceived as intelligent by society as a whole, in an ongoing manner, in normal life situations. But if that were to happen there would be no need for an intelligence test, because “society perceiving a system as intelligent” is the definition of a system’s being intelligent, not a sign of it! (See the Social-Relationist Premise, Sect. 3.1.)

  22.

    This is how Turing’s prediction was understood by Mays (1952, pp. 149–151), Beran (2014), and others. (Piccinini 2000 reads Turing as hoping that such a change will occur.) For an illuminating discussion regarding the possibility of this sort of change (not concerning Turing’s paper) see Torrance (2014).

  23.

    The main difference between Turing’s IG and Bringsjord et al.’s “Lovelace Test” mentioned above (fn. 16) is that while the IG is descriptive, the Lovelace Test is normative (see Bringsjord et al. 2001, p. 9).

  24.

    Sloman makes a similar point and says that while computers are now doing much cleverer things, “increasing numbers of humans have been learning about what computers can and cannot do” (Sloman 2013, p. 3). Indeed, getting humans to attribute intelligence to machines might become harder with time.

  25.

    In his brief reply to the “Argument from Consciousness”, Turing seems to claim that if a machine did well in the IG it would be perceived as conscious too (1950, pp. 445–447; see Michie 1993, pp. 4–7; but cf. Copeland 2004, pp. 566–567). I am of the opinion that, like intelligence, consciousness and other mental phenomena can be explained in terms of being perceived by society; I plan to discuss this elsewhere.

  26.

    Both properties mentioned were suggested by Mays (1952), in his analysis of Turing’s 1950 paper. Interestingly, Turing himself seems to have viewed both properties as insignificant for intelligence attribution (see Turing 1950, p. 434; Davidson 1990). For a list of other properties that might shape humans’ attitude towards machines, see Torrance (2014).

  27.

    Discussions regarding the active role of humans in drawing the borders of the “Charmed Circle” of consciousness or intelligence (relevant also to the issue of animal consciousness and to disputes regarding humans’ attitude towards animals) can be found in Dennett (1987b) and Michie (1993).

References

  • Beran, O.: Wittgensteinian perspectives on the Turing test. Studia Philosophica Estonica 7(1), 35–57 (2014)

  • Block, N.: Psychologism and behaviorism. Philos. Rev. 90, 5–43 (1981)

  • Block, N.: The mind as the software of the brain. In: Smith, E.E., Osherson, D.N. (eds.) Thinking, pp. 377–425. MIT Press, Cambridge (1995)

  • Boden, M.A.: Mind as Machine: A History of Cognitive Science. Oxford University Press, Oxford (2006)

  • Bringsjord, S., Bello, P., Ferrucci, D.: Creativity, the Turing test, and the (better) Lovelace test. Mind. Mach. 11, 3–27 (2001)

  • Chomsky, N.: Turing on the “imitation game”. In: Epstein, R., Roberts, G., Beber, G. (eds.) Parsing the Turing Test, pp. 103–106. Springer, New York (2008)

  • Coeckelbergh, M.: Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf. Technol. 12(3), 209–221 (2010)

  • Copeland, B.J. (ed.): The Essential Turing. Oxford University Press, Oxford (2004)

  • Danziger, S.: Can computers be thought of as thinkers? Externalist and linguistic perspectives on the Turing test. MA thesis, Hebrew University of Jerusalem (2016). [Hebrew]

  • Davidson, D.: Turing’s test. In: Said, K., Newton-Smith, W., Viale, R., Wilkes, K. (eds.) Modelling the Mind, pp. 1–12. Clarendon Press, Oxford (1990)

  • Dennett, D.C.: The Intentional Stance. MIT Press, Cambridge (1987a)

  • Dennett, D.C.: Consciousness. In: Gregory, R.L., Zangwill, O.L. (eds.) The Oxford Companion to the Mind, pp. 160–164. Oxford University Press, Oxford (1987b)

  • French, R.M.: Subcognition and the limits of the Turing test. Mind 99(393), 53–65 (1990)

  • Hodges, A.: Alan Turing: The Enigma. Princeton University Press, Princeton and Oxford (2014)

  • Mallery, J.C.: Thinking about foreign policy: finding an appropriate role for artificially intelligent computers. The 1988 Annual Meeting of the International Studies Association, St. Louis (1988). CiteSeerX 10.1.1.50.3333

  • Mays, W.: Can machines think? Philosophy 27, 148–162 (1952)

  • Michie, D.: Turing’s test and conscious thought. Artif. Intell. 60(1), 1–22 (1993)

  • Moor, J.H.: An analysis of the Turing test. Philos. Stud. 30, 249–257 (1976)

  • Moor, J.H.: The status and future of the Turing test. Mind. Mach. 11, 77–93 (2001)

  • Oppy, G., Dowe, D.: The Turing test. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy (2011). plato.stanford.edu/archives/spr2011/entries/turing-test. Accessed 13 Oct 2017

  • Piccinini, G.: Turing’s rules for the imitation game. Mind. Mach. 10, 573–582 (2000)

  • Proudfoot, D.: A new interpretation of the Turing test. Rutherford J. N. Z. J. Hist. Philos. Sci. Technol. 1 (2005). Article 010113. rutherfordjournal.org/article010113.html. Accessed 1 Nov 2017

  • Proudfoot, D.: Rethinking Turing’s test. J. Philos. 110(7), 391–411 (2013)

  • Saygin, A., Cicekli, I., Akman, V.: Turing test: 50 years later. Mind. Mach. 10, 463–518 (2000)

  • Schweizer, P.: The truly total Turing test. Mind. Mach. 8, 263–272 (1998)

  • Searle, J.R.: Minds, brains, and programs. Behav. Brain Sci. 3, 417–424 (1980)

  • Sloman, A.: The mythical Turing test. In: Cooper, S.B., Van Leeuwen, J. (eds.) Alan Turing: His Work and Impact, pp. 606–611. Elsevier, Amsterdam (2013)

  • Torrance, S.: Artificial consciousness and artificial ethics: between realism and social relationism. Philos. Technol. 27(1), 9–29 (2014)

  • Turing, A.M.: On computable numbers, with an application to the Entscheidungsproblem. Reprinted in Copeland (2004), pp. 58–90 (1936)

  • Turing, A.M.: Lecture on the automatic computing engine. Reprinted in Copeland (2004), pp. 378–394 (1947)

  • Turing, A.M.: Intelligent machinery. Reprinted in Copeland (2004), pp. 410–432 (1948)

  • Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)

  • Turing, A.M., Braithwaite, R., Jefferson, G., Newman, M.: Can automatic calculating machines be said to think? Reprinted in Copeland (2004), pp. 494–506 (1952)

  • Watt, S.: Naive psychology and the inverted Turing test. Psycoloquy 7(14) (1996). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.2705&rep=rep1&type=pdf. Accessed 2 Nov 2017

  • Whitby, B.: The Turing test: AI’s biggest blind alley? In: Millican, P., Clark, A. (eds.) Machines and Thought: The Legacy of Alan Turing, pp. 53–62. Clarendon Press, Oxford (1996)

  • Wittgenstein, L.: The Blue and Brown Books: Preliminary Studies for the “Philosophical Investigations”. Harper & Row, New York (1958)

  • Wittgenstein, L.: Philosophical Investigations (Trans: G.E.M. Anscombe, P.M.S. Hacker, & J. Schulte; revised fourth edition by P.M.S. Hacker & J. Schulte). Wiley-Blackwell, Chichester (2009)

Acknowledgments

Research for this paper was financially supported by the Sidney M. Edelstein Center for the History and Philosophy of Science, Technology, and Medicine at the Hebrew University of Jerusalem; and by the Centre for Moral and Political Philosophy (CMPP) at the Hebrew University of Jerusalem. I thank Orly Shenker, Oron Shagrir, Netanel Kupfer, Sander Beckers, Anna Strasser, Selmer Bringsjord and Vincent C. Müller, who reviewed this paper and added thoughtful comments. Special thanks to the participants of the PT-AI 2017 Conference for thought-provoking discussions, and to Shira Kramer-Danziger for her assistance in editing and her wise advice.

Author information

Correspondence to Shlomo Danziger.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Danziger, S. (2018). Where Intelligence Lies: Externalist and Sociolinguistic Perspectives on the Turing Test and AI. In: Müller, V. (eds) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-319-96448-5_15

  • DOI: https://doi.org/10.1007/978-3-319-96448-5_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96447-8

  • Online ISBN: 978-3-319-96448-5

  • eBook Packages: Computer Science; Computer Science (R0)
