Abstract
Turing’s Imitation Game (1950) is usually understood to be a test for machines’ intelligence; I offer an alternative interpretation. Turing, I argue, held an externalist-like view of intelligence, according to which an entity’s being intelligent depends not just on its functions and internal structure, but also on the way it is perceived by society. He conditioned the determination that a machine is intelligent upon two criteria: one technological and one sociolinguistic. The Technological Criterion requires that the machine’s structure enable it to imitate the human brain so well that it displays intelligent-like behavior; the Imitation Game tests whether this Technological Criterion is fulfilled. The Sociolinguistic Criterion requires that the machine be perceived by society as a potentially intelligent entity. Turing recognized that in his day this Sociolinguistic Criterion could not be fulfilled, due to humans’ chauvinistic prejudice towards machines; but he believed that future development of machines displaying intelligent-like behavior would cause this chauvinistic attitude to change. I conclude by discussing some implications Turing’s view may have in the fields of AI development and ethics.
Notes
1. My reading is supported by several non-orthodox commentaries on Turing scattered throughout the literature, such as Whitby (1996), Boden (2006, pp. 1346–1356), Sloman (2013), and especially Proudfoot (2005; 2013). Some of the arguments suggested in this paper have appeared in Danziger (2016).
2.
3. Almost all commentaries on – and attacks against – Turing’s paper can be classified into one of the two streams of interpretation described in Sects. 2.1 and 2.2; see Proudfoot (2013) for a detailed analysis and critique of these interpretations. Other ways of classification can be found in Saygin et al. (2000) and in Oppy and Dowe (2011).
4. Searle’s interpretation of Turing is also behavioristic: “The Turing Test is typical of the tradition in being unashamedly behavioristic and operationalistic” (Searle 1980, p. 423; cf. next footnote). References to other behavioristic interpretations can be found in Proudfoot (2013), Copeland (2004, pp. 434–435), and Moor (2001, pp. 81–82).
5. This is the crux of perhaps the two best-known arguments against the IG, namely Searle’s Chinese Room (Searle 1980) and Block’s Blockhead / Aunt Bubbles Machine (Block 1981; 1995): intelligence, they maintain, cannot be captured in behavioral terms alone. (Note that both arguments belong to the behavioristic school of interpretation, in that they assume that the IG is intended to be a behavioral test for intelligence.)
6.
7. The ideas in this section draw partly on Proudfoot (2005; 2013). As I shall show later, my interpretation of Turing differs from Proudfoot’s in small but crucial points; to prevent inaccuracies I shall refrain for now from mentioning her take on the subjects discussed, despite my great debt to her work.
8. Turing’s approach also bears resemblance to Dennett’s “intentional stance” (Dennett 1987a).
9.
10. Turing is trying to prove that the existence of an intelligent machine is possible, not merely asking whether it is possible. He will therefore try to show that machines fulfill a sufficient condition for being (perceived as) intelligent, and will put less emphasis on the necessary conditions.
11. There is no need to point out here which properties of the “learning machine” are necessary conditions for perceiving a system as intelligent; all that is being claimed is that a “learning machine” indeed has these properties, whatever they may be.
12. It will later become clear why this claim is labeled “minor”.
13. Hodges (2014, p. 530) explains in a similar way the difference between Turing’s 1948 and 1950 papers.
14. “Intelligent-like behavior” may be roughly defined as “behavior that under regular circumstances cannot be differentiated from that of a human”.
15. The Technological Criterion (1950) is closely connected to the Minor-Technological Claim (1947, 1948) but is more “demanding” (as explained above, Sect. 4.1); that is why the 1947–1948 claim is labeled “minor”.
16. Bringsjord et al. (2001) mention a similar idea of a “restricted epistemic relation”: they suggest the “Lovelace Test” for intelligence, in which “not knowing how a system works” is a necessary condition for attributing intelligence to it. The fundamental difference between the Lovelace Test and Turing’s IG will be explained later (fn. 23).
17.
18. The Sociolinguistic Criterion (1950) is closely connected to the Sociological Claim (1947, 1948) mentioned in Sect. 3.2. The addition of the “linguistic” component will soon be explained.
19. At this point one might raise the following objection: “Your reading boldly ignores the next sentence in Turing’s paper, in which he supposedly predicts that in fifty years there would be intelligent machines (1950, p. 442): ‘Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’ This implies that Turing identified success in constructing machines that do well in the Game with success in creating intelligent machines; the timeframe in both sentences is the same (the year 2000), and so they seem to be referring to the same futuristic occurrence!” My reply, in short, is that this objection is based on an incorrect – albeit very common – reading of the passage in Turing’s paper. Turing, I claim, makes two different predictions here, and these predictions are connected causally but not logically. “Doing well in the IG” is not the same as “being intelligent”. The IG, I insist, is not a test for intelligence, but a test only for the Technological Criterion of intelligence: it tests whether a system’s behavior is intelligent-like. (I shall return to this issue in Sect. 5.1.)
20. Aaron Sloman, too, sees the IG as Turing’s way of defining a technological challenge, and not as a test for intelligence (Sloman 2013). In an earlier version of his paper Sloman expresses his dissatisfaction with the orthodox interpretations of the IG; I found myself wholly identifying with his words (my italics): “It is widely believed that Turing proposed a test for intelligence. This is false. He was far too intelligent to do any such thing, as should be clear to anyone who has read his paper…” (Source: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-test.html. Accessed 11 Oct 2017.)
21. To develop this point further: a real test for a system’s intelligence would check whether the system is perceived as intelligent by society as a whole, in an ongoing manner, in normal life situations. But if that were to happen there would be no need for an intelligence test, because “society perceiving a system as intelligent” is the definition of a system’s being intelligent, not a sign of it! (See the Social-Relationist Premise, Sect. 3.1.)
22. This is how Turing’s prediction was understood by Mays (1952, pp. 149–151), Beran (2014), and others. (Piccinini 2000 understands that Turing hopes such a change will occur.) For an illuminating discussion regarding the possibility of this sort of change (not concerning Turing’s paper) see Torrance (2014).
23. The main difference between Turing’s IG and Bringsjord et al.’s “Lovelace Test” mentioned above (fn. 16) is that while the IG is descriptive, the Lovelace Test is normative (see Bringsjord et al. 2001, p. 9).
24. Sloman makes a similar point, saying that while computers are now doing much cleverer things, “increasing numbers of humans have been learning about what computers can and cannot do” (Sloman 2013, p. 3). Indeed, getting humans to attribute intelligence to machines might become harder with time.
25. In his brief reply to the “Argument from Consciousness”, Turing seems to claim that if a machine did well in the IG it would be perceived as conscious too (1950, pp. 445–447; see Michie 1993, pp. 4–7, but cf. Copeland 2004, pp. 566–567). I am of the opinion that, like intelligence, consciousness and other mental phenomena can also be explained in terms of being perceived by society; I plan to discuss this elsewhere.
26. Both properties mentioned were suggested by Mays (1952) in his analysis of Turing’s 1950 paper. Interestingly, Turing himself seems to have viewed both properties as insignificant for intelligence attribution (see Turing 1950, p. 434; Davidson 1990). For a list of other properties that might shape humans’ attitude towards machines, see Torrance (2014).
27.
References
Beran, O.: Wittgensteinian perspectives on the Turing test. Studia Philosophica Estonica 7(1), 35–57 (2014)
Block, N.: Psychologism and behaviorism. Philos. Rev. 90, 5–43 (1981)
Block, N.: The mind as the software of the brain. In: Smith, E.E., Osherson, D.N. (eds.) Thinking, pp. 377–425. MIT Press, Cambridge (1995)
Boden, M.A.: Mind as Machine: A History of Cognitive Science. Oxford University Press, Oxford (2006)
Bringsjord, S., Bello, P., Ferrucci, D.: Creativity, the Turing test, and the (better) Lovelace test. Minds Mach. 11, 3–27 (2001)
Chomsky, N.: Turing on the “imitation game”. In: Epstein, R., Roberts, G., Beber, G. (eds.) Parsing the Turing Test, pp. 103–106. Springer, New York (2008)
Coeckelbergh, M.: Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf. Technol. 12(3), 209–221 (2010)
Copeland, B.J. (ed.): The Essential Turing. Oxford University Press, Oxford (2004)
Danziger, S.: Can computers be thought of as thinkers? Externalist and linguistic perspectives on the Turing test. MA thesis, Hebrew University of Jerusalem (2016). [Hebrew]
Davidson, D.: Turing’s test. In: Said, K., Newton-Smith, W., Viale, R., Wilkes, K. (eds.) Modelling the Mind, pp. 1–12. Clarendon Press, Oxford (1990)
Dennett, D.C.: The Intentional Stance. MIT Press, Cambridge (1987a)
Dennett, D.C.: Consciousness. In: Gregory, R.L., Zangwill, O.L. (eds.) The Oxford Companion to the Mind, pp. 160–164. Oxford University Press, Oxford (1987b)
French, R.M.: Subcognition and the limits of the Turing test. Mind 99(393), 53–65 (1990)
Hodges, A.: Alan Turing: The Enigma. Princeton University Press, Princeton and Oxford (2014)
Mallery, J.C.: Thinking about foreign policy: finding an appropriate role for artificially intelligent computers. The 1988 Annual Meeting of the International Studies Association, St. Louis (1988). CiteSeerX 10.1.1.50.3333
Mays, W.: Can machines think? Philosophy 27, 148–162 (1952)
Michie, D.: Turing’s test and conscious thought. Artif. Intell. 60(10), 1–22 (1993)
Moor, J.H.: An analysis of the Turing test. Philos. Stud. 30, 249–257 (1976)
Moor, J.H.: The status and future of the Turing test. Minds Mach. 11, 77–93 (2001)
Oppy, G., Dowe, D.: The Turing test. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy (2011). plato.stanford.edu/archives/spr2011/entries/turing-test. Accessed 13 Oct 2017
Piccinini, G.: Turing’s rules for the imitation game. Minds Mach. 10, 573–582 (2000)
Proudfoot, D.: A new interpretation of the Turing test. Rutherford J. N. Z. J. Hist. Philos. Sci. Technol. 1 (2005). Article 010113. rutherfordjournal.org/article010113.html. Accessed 1 Nov 2017
Proudfoot, D.: Rethinking Turing’s test. J. Philos. 110(7), 391–411 (2013)
Saygin, A., Cicekli, I., Akman, V.: Turing test: 50 years later. Minds Mach. 10, 463–518 (2000)
Schweizer, P.: The truly total Turing test. Minds Mach. 8, 263–272 (1998)
Searle, J.R.: Minds, brains, and programs. Behav. Brain Sci. 3, 417–424 (1980)
Sloman, A.: The mythical Turing test. In: Cooper, S.B., Van Leeuwen, J. (eds.) Alan Turing: His Work and Impact, pp. 606–611. Elsevier, Amsterdam (2013)
Torrance, S.: Artificial consciousness and artificial ethics: between realism and social relationism. Philos. Technol. 27(1), 9–29 (2014)
Turing, A.M.: On computable numbers, with an application to the Entscheidungsproblem. Reprinted in Copeland (2004), pp. 58–90 (1936)
Turing, A.M.: Lecture on the automatic computing engine. Reprinted in Copeland (2004), pp. 378–394 (1947)
Turing, A.M.: Intelligent machinery. Reprinted in Copeland (2004), pp. 410–432 (1948)
Turing, A.M.: Computing machinery and intelligence. Mind 59(236), 433–460 (1950)
Turing, A.M., Braithwaite, R., Jefferson, G., Newman, M.: Can automatic calculating machines be said to think? Reprinted in Copeland (2004), pp. 494–506 (1952)
Watt, S.: Naive psychology and the inverted Turing test. Psycoloquy 7(14) (1996). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.2705&rep=rep1&type=pdf. Accessed 2 Nov 2017
Whitby, B.: The Turing test: AI’s biggest blind alley? In: Millican, P., Clark, A. (eds.) Machines and Thought: The Legacy of Alan Turing, pp. 53–62. Clarendon Press, Oxford (1996)
Wittgenstein, L.: The Blue and Brown Books: Preliminary Studies for the “Philosophical Investigations”. Harper & Row, New York (1958)
Wittgenstein, L.: Philosophical Investigations (Trans: G.E.M. Anscombe, P.M.S. Hacker, & J. Schulte; revised fourth edition by P.M.S. Hacker & J. Schulte). Wiley-Blackwell, Chichester (2009)
Acknowledgments
Research for this paper was financially supported by the Sidney M. Edelstein Center for the History and Philosophy of Science, Technology, and Medicine at the Hebrew University of Jerusalem; and by the Centre for Moral and Political Philosophy (CMPP) at the Hebrew University of Jerusalem. I thank Orly Shenker, Oron Shagrir, Netanel Kupfer, Sander Beckers, Anna Strasser, Selmer Bringsjord and Vincent C. Müller, who reviewed this paper and added thoughtful comments. Special thanks to the participants of the PT-AI 2017 Conference for thought-provoking discussions, and to Shira Kramer-Danziger for her assistance in editing and her wise advice.
© 2018 Springer Nature Switzerland AG
Cite this paper
Danziger, S. (2018). Where Intelligence Lies: Externalist and Sociolinguistic Perspectives on the Turing Test and AI. In: Müller, V. (eds) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-319-96448-5_15
Print ISBN: 978-3-319-96447-8
Online ISBN: 978-3-319-96448-5