Abstract
Which notion of computation (if any) is essential for explaining cognition? Five answers to this question are discussed in the paper. (1) The classicist answer: symbolic (digital) computation is required for explaining cognition; (2) The broad digital computationalist answer: digital computation broadly construed is required for explaining cognition; (3) The connectionist answer: sub-symbolic computation is required for explaining cognition; (4) The computational neuroscientist answer: neural computation (that, strictly, is neither digital nor analogue) is required for explaining cognition; (5) The extreme dynamicist answer: computation is not required for explaining cognition. The first four answers are only accurate to a first approximation. But the “devil” is in the details. The last answer cashes in on the parenthetical “if any” in the question above. The classicist argues that cognition is symbolic computation. But digital computationalism need not be equated with classicism. Indeed, computationalism can, in principle, range from digital (and analogue) computationalism through (the weaker thesis of) generic computationalism to (the even weaker thesis of) digital (or analogue) pancomputationalism. Connectionism, which has traditionally been criticised by classicists for being non-computational, can be plausibly construed as being either analogue or digital computationalism (depending on the type of connectionist networks used). Computational neuroscience invokes the notion of neural computation that may (possibly) be interpreted as a sui generis type of computation. The extreme dynamicist argues that the time has come for a post-computational cognitive science. This paper is an attempt to shed some light on this debate by examining various conceptions and misconceptions of (particularly digital) computation.
Notes
This claim raises some ontological quandaries about semantics being confined to some physical boundaries. To avoid a metaphysical debate, let me clarify. In conventional digital computers, computer programs are translated into machine language, which drives the operation of the computer at the hardware level. Take the following code example in assembly (a low-level programming language that operates very close to the hardware).
__asm__ ("movl $2, %eax;"
         "movl $25, %ebx;"
         "imull %ebx, %eax;");
This code tells the computer to multiply 2 by 25 and store the result in register %eax. The end result might represent, say, a total of 50 apples for a field trip of 25 children. But that makes no difference to the execution of the instructions above. The semantics of those instructions (i.e., moving data between registers, multiplying values, etc.) is contained within the boundaries of the computer.
If, for some technical reasons, this mechanism is replaced with a soft-wired mechanism (i.e., either through explicit how-to rules or a soft-constraint learning mechanism), the overall principle will still hold. Even in the case of the soft-constraint learning mechanism, it will eventually learn (say, by heuristics) how to perform effectively without knowing what it is doing.
At the program level, any factual information entered by a user is converted into something recognisable by the computing system by using an implicit semantics dictionary. This dictionary is used to translate any factual information into some data structure that is recognisable by the program. The ace of hearts card, for instance, is translated into a data structure with properties such as a shape, a number, etc. This data structure can be processed by the program and when appropriate, the processed data can be translated back into some form of human readable information as output.
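To illustrate, here is a minimal sketch in C of such a translation. The struct layout, field names, and the `parse_card` function are illustrative assumptions, not taken from any particular program; the point is only that human-readable input is mapped onto an internal data structure with properties such as a shape and a number, and back again.

```c
#include <assert.h>
#include <string.h>

/* Illustrative internal representation of a playing card: the program's
 * "implicit semantics dictionary" maps human-readable input onto a data
 * structure with properties such as a suit (shape) and a rank (number). */
typedef enum { HEARTS, DIAMONDS, CLUBS, SPADES } Suit;

typedef struct {
    Suit suit;
    int  rank;  /* 1 = ace, 2-10, 11 = jack, 12 = queen, 13 = king */
} Card;

/* Translate factual user input into the internal data structure. */
Card parse_card(const char *name) {
    Card c = { HEARTS, 1 };  /* default: ace of hearts */
    if (strcmp(name, "ace of hearts") == 0) {
        c.suit = HEARTS;
        c.rank = 1;
    }
    return c;
}

/* Translate the processed data back into human-readable output. */
const char *rank_name(const Card *c) {
    return (c->rank == 1) ? "ace" : "(other)";
}
```

The program manipulates the `Card` structure without any access to what cards are; the mapping in and out of human-readable form is where the external semantics enters.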
Operators (such as ‘+’, ‘−’, or ‘copy’) are symbols or symbolic expressions that have an external semantics built into them (Newell 1980: p. 159).
The semantic level, for example, is sometimes equated with Marr’s top/computational level, but it should not be. Marr’s top level characterises the function computed by the cognitive system. This computation may (but need not) involve the assignment of semantic contents.
This imposed representational constraint is unsurprising, as the motivation of the classicists, who promote either the FSM or PSS accounts, was advancing a substantive empirical hypothesis about how human cognition works.
Gordana Dodig-Crnkovic asserts that to make pancomputationalism a substantial thesis that plays a key role in a scientific theory about the universe, we should adopt a realist weak version of pancomputationalism (Dodig-Crnkovic and Burgin 2011: pp. 154–155). All processes can be described as computational processes, since such a description happens to be useful in a scientific theory. It is ‘weak’ in the sense that it focuses on ways of description, rather than on realist ontology.
A digit, on this account, is a stable state of a component that is processed by the computing system. In ordinary electronic computers digits are states of physical components of the machine (e.g., memory cells).
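As a toy illustration (the voltage thresholds below are invented for the example), a component's continuous physical state can be mapped onto a small number of stable digit states, with intermediate readings counting as no digit at all rather than as something "in between":

```c
#include <assert.h>

/* Map a component's continuous voltage to a discrete digit.
 * Only the two stable states count as digits; a reading in the
 * forbidden middle band is invalid, not an intermediate digit. */
int voltage_to_digit(double volts) {
    if (volts < 0.8) return 0;   /* stable low state  */
    if (volts > 2.0) return 1;   /* stable high state */
    return -1;                   /* no stable digit   */
}
```

This discreteness of the states processed, rather than the continuity of the underlying physics, is what makes the representation digital.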
A Gandy machine is a deterministic discrete machine that can perform operations in parallel. It can be conceptualised as multiple TMs working in parallel, sharing the same tape and possibly writing on overlapping regions of it.
It is worth noting that Robert Cummins, for instance, also holds the view that digital computation is the execution of algorithms (or programs), but his view does presuppose extrinsic representations. “[B]eing able to track computations under their semantic interpretations allows us to see how a physical engine—a computer—can satisfy epistemic constraints” (Cummins 1996: p. 66). But his account of computation proper is ultimately inadequate for other reasons as well. On his account, Searle’s wall also computes (Copeland 1996: p. 353).
Neural (or connectionist) networks consist of multiple interconnected homogeneous units called ‘neurons’. These nets can be classified into two general categories: feedforward nets and feedback (or recurrent) nets. In the former case, units are arranged across multiple layers such that the output of units in one layer depends only on units in previous layers. The outputs of units are updated layer by layer, with the first layer being the input layer and the last being the output layer. In the latter case, feedback loops in the network allow signals between units to travel in both directions (rather than just in a unidirectional forward manner). A source of controversy arises regarding representations in connectionist nets. On the localist interpretation, each individual unit that is active in a particular distributed activation pattern realises an individual representation contributing to the overall content of the activation pattern. On the distributed interpretation, a representation is realised either by an activation pattern alone or by an activation pattern together with its weighted connections. For further discussion of neural networks see, for example, Tienson’s introduction (1988).
Otherwise, if McCulloch and Pitts networks were classified as analogue computing systems, then digital computers would be analogue too.
By implementing soft constraints, connectionist networks arguably allow the task demands, rather than the designer's biases (as in rule-driven digital computing systems), to be the primary driver shaping the operation of the network. To some extent, this approach reflects a shift in methodology when compared with Marr’s classical top-down approach (which is overtly endorsed by classicists).
Still, this does not completely resolve the classicists’ main complaint about connectionist networks: that they do not process structured symbolic representations. Fodor and Pylyshyn think that cognition is syntactically governed manipulation of structured representations. Connectionism, so they conclude, is hopeless as a (competence) theory of cognition (Fresco 2010).
Of course, some degree of simplification is needed to make any model viable, since models, by definition, abstract away from some of the particulars of the modelled system. The question here is whether connectionist networks simplify too much in the process of modelling cognition.
For one thing, neural activity has many sources of noise that sometimes render the underlying computation imprecise. This suggests that, unlike digital computation, natural computation itself is noisy and imprecise (MacLennan 2004: p. 129).
The label ‘extreme dynamicism’ is used to alert the reader that in some sense, any cognitive scientist is by definition a dynamicist. For there seems to be a consensus that cognition is a dynamical phenomenon, and as such it requires some application of dynamical systems theory. So, for clarity, the label ‘extreme dynamicism’ is chosen to denote the anti-computationalist position.
To be sure, these different approaches are logically autonomous. One can subscribe to any particular approach without necessarily subscribing to the others. For a nice discussion on the history and differences amongst those approaches see, for example, Thompson (2007: pp. 3–15).
More precisely, Brooks only rejects what I dubbed extrinsic representations for the computations performed by these mobots. “[T]here need be no explicit representation of either the world or the intentions of the system” (Brooks 1991: p. 149).
I follow Craver and Bechtel (2006: p. 469) in labelling this characteristic ‘phenomenal’ in a manner unrelated to phenomenology.
Weiskopf cites some researchers corroborating this claim. For example:
“[P]sychological primitives are functional abstractions for brain networks that contribute to the formation of neuronal assemblies that make up each brain state” (Lisa Barrett, as cited by Weiskopf 2011: p. 330).
“Almost every cognitive task involves the activation of a network of brain regions (say, 4-10 per hemisphere) rather than a single area” (Marcel Just et al. as cited by Weiskopf 2011: p. 330).
Piccinini and Craver (2011: p. 303) argue that Marr’s three levels are not levels of mechanism, since they do not describe relations among components or subcomponents. On their interpretation, the computational and algorithmic levels are mechanistic sketches. The computational level describes the mechanism’s task and the algorithmic level describes the computational vehicles as well as the processes that manipulate these vehicles.
References
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Bechtel, W. (1998a). Representations and cognitive explanations: Assessing the dynamicist’s challenge in cognitive science. Cognitive Science, 22, 295–318.
Bechtel, W. (1998b). Dynamicists versus computationalists: Whither mechanists? Behavioral and Brain Sciences, 21, 629.
Bechtel, W. (2001). Representations: From neural systems to cognitive systems. In W. Bechtel, P. Mandik, J. Mundale, & R. S. Stufflebeam (Eds.), Philosophy and the neurosciences: A reader. Oxford: Basil Blackwell.
Bechtel, W. (2009). Constructing a philosophy of science of cognitive science. Topics in Cognitive Science, 1, 548–569.
Bechtel, W., & Abrahamsen, A. (2002). Connectionism and the mind: Parallel processing, dynamics, and evolution in networks (2nd ed.). Oxford: Basil Blackwell.
Bechtel, W., & Richardson, R. C. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research (2nd ed.). Cambridge: The MIT Press.
Beer, R. (forthcoming). Dynamical systems and embedded cognition. In K. Frankish and W. Ramsey (Eds.), The Cambridge handbook of artificial intelligence. Cambridge University Press.
Boden, M. A. (2008). Information, computation, and cognitive science. In P. Adriaans and J. van Benthem (Eds.). Handbook of the philosophy of science, volume 8: Philosophy of information, pp. 741–761. Amsterdam: Elsevier.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
Burks A. W., Goldstine, H. H., & von Neumann, J. (1946). Preliminary discussion of the logical design of an electronic computing instrument. In B. Randell (Ed.), The origins of digital computers: Selected papers. (3rd ed. 1982). pp. 399–414. New York: Springer.
Chalmers, D. (1992). Subsymbolic computation and the Chinese room. In J. Dinsmore (Ed.), The symbolic and connectionist paradigms: Closing the gap (pp. 25–47). Hillsdale, NJ: Lawrence Erlbaum.
Chalmers, D. (1993). Why Fodor and Pylyshyn were wrong: The simplest refutation. Philosophical Psychology, 6, 305–319.
Chomsky, N. (1992). Language and interpretation: Philosophical reflections and empirical Inquiry. In J. Earman (Ed.), Inference, explanation, and other frustrations: Essays in the philosophy of Science (pp. 99–128). Berkeley: University of California Press.
Churchland, P. S., Koch, C., & Sejnowski, T. J. (1988). What is computational neuroscience? In E. L. Schwartz (Ed.), Computational neuroscience (pp. 46–55). Cambridge, MA: The MIT Press.
Churchland, P. S., & Sejnowski, T. J. (1992). The computational brain. Cambridge, MA: The MIT Press.
Clark, A. (1990). Connectionism, competence, and explanation. The British Journal for the Philosophy of Science, 41, 195–222.
Copeland, B. J. (1996). What is computation? Synthese, 108, 335–359.
Copeland, B. J. (1997). The broad conception of computation. The American Behavioral Scientist, 40, 690–716.
Craver, C. F., & Bechtel, W. (2006). Mechanism. In S. Sarkar & J. Pfeifer (Eds.), Philosophy of science: An encyclopedia (pp. 469–478). New York: Routledge.
Cummins, R. (1996). Representations, targets, and attitudes. Cambridge: The MIT Press.
Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: The MIT Press.
Dennett, D. C. (1991). Consciousness explained. New York: Little, Brown and Company.
Diederich, J. (1990). Spreading activation and connectionist models for natural language processing. Theoretical Linguistics, 16, 25–64.
Dodig-Crnkovic, G., & Burgin, M. (2011). Information and computation: Essays on scientific and philosophical understanding of foundations of information and computation. Singapore: World Scientific Publishing Company.
Dreyfus, H. (1972). What computers can’t do. New York: Harper and Row.
Egan, F. (2011). Two kinds of representational contents for cognitive theorizing. Paper presented at the 2011 Philosophy and the Brain conference at the Institute for Advanced Studies, Hebrew University in Jerusalem, Israel. Retrieved May 9, 2011, from https://sites.google.com/site/philosophybrainias2011/home/conference-papers-1/Egan-TwoKindsofRepContent.pdf?attredirects=0&d=1.
Eliasmith, C. (2003). Moving beyond metaphors: Understanding the mind for what it is. The Journal of Philosophy, 100, 493–520.
Eliasmith, C. (2007). Computational neuroscience. In P. Thagard (Ed.), Philosophy of psychology and cognitive science: Handbook of philosophy of science. Amsterdam: Elsevier.
Eliasmith, C., & Anderson, C. H. (2003). Neural engineering: Computation, representation and dynamics in neurobiological systems. Cambridge: MIT Press.
Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3, 63–109.
Fodor, J. A. (1981). The mind-body problem. Scientific American, 244, 114–123.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.
Fresco, N. (2010). A computational account of connectionist networks. Recent Patents on Computer Science, 3, 20–27.
Froese, T. (2011). Breathing new life into cognitive science. Avant, 2, 113–129.
Gandy, R. (1980). Church’s thesis and principles for mechanisms. In J. Barwise, H. J. Keisler, & K. Kunen (Eds.), The Kleene symposium (pp. 123–148). Amsterdam: North-Holland.
Hamann, H., & Wörn, H. (2007). Embodied computation. Parallel Processing Letters, 17, 287–298.
Haugeland, J. (1985). AI: The very idea. Cambridge, MA: The MIT Press.
Horst, S. (1999). Symbols and computation: A critique of the computational theory of mind. Minds and Machines, 9, 347–381.
Izhikevich, E. M. (2007). Dynamical systems in neuroscience. Cambridge, MA: The MIT Press.
Kaplan, D. M., & Bechtel, W. (2011). Dynamical models: An alternative or complement to mechanistic explanations? Topics in Cognitive Science, 3, 438–444.
Kremer, S. C. (2007). Spatio-temporal connectionist networks. In P. A. Fishwick (Ed.), Handbook of dynamic system modelling. London: Chapman & Hall/CRC.
Maass, W., & Markram, H. (2004). On the computational power of circuits of spiking neurons. Journal of Computer and System Sciences, 69, 593–616.
Machamer, P. (2004). Activities and causation: The metaphysics and epistemology of mechanisms. International Studies in the Philosophy of Science, 18, 27–39.
MacLennan, B. J. (2001). Connectionist approaches. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social and behavioral sciences (pp. 2568–2573). Oxford: Elsevier.
MacLennan, B. J. (2004). Natural computation and non-Turing models of computation. Theoretical Computer Science, 317, 115–145.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. New York: W. H. Freeman & Co.
Matthews, R. (1997). Can connectionists explain systematicity? Mind and Language, 12, 154–177.
McCulloch, W., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.
Newell, A. (1980). Physical symbol systems. Cognitive Science, 4, 135–183.
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19, 113–126.
O’Brien, G. (1999). Connectionism, analogicity and mental content. Acta Analytica, 22, 111–131.
O’Brien, G., & Opie, J. (2006). How do connectionist networks compute? Cognitive Processing, 7, 30–41.
Pfeifer, R., & Scheier, C. (1999). Understanding intelligence. Cambridge, MA: The MIT Press.
Piccinini, G. (2008a). Computation without representation. Philosophical Studies, 137, 205–241.
Piccinini, G. (2008b). Computers. Pacific Philosophical Quarterly, 89, 32–73.
Piccinini, G. (2008c). Some neural networks compute, others don’t. Neural Networks, 21, 311–321.
Piccinini, G., & Bahar, S. (2011). Neural computation and the computational theory of cognition. Paper presented at the Computation and the Brain workshop 2011 at the Institute for Advanced Studies, Hebrew University in Jerusalem, Israel. Retrieved June 2, 2011, from https://sites.google.com/site/iascomputationbrainhuji2011/home/previous-lectures/Piccinini%26Bahar_NeuralComputationandtheComputationalTheoryofCognition.doc?attredirects=0&d=1.
Piccinini, G., & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311.
Piccinini, G., & Scarantino, A. (2011). Information processing, computation and cognition. Journal of Biological Physics, 37, 1–38.
Poggio, T., & Koch, C. (1985). Ill-posed problems in early vision: From computational theory to analogue networks. Proceedings of the Royal Society of London, Series B, Biological Sciences, 226, 303–323.
Putnam, H. (1988). Representation and reality. Cambridge, MA: The MIT Press.
Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: The MIT Press.
Pylyshyn, Z. W. (1993). Computers and the symbolization of knowledge. In R. Morelli, W. M. Brown, D. Anselmi, K. Haberlandt, & D. Lloyd (Eds.), Minds brains and computers: Perspectives in cognitive science and artificial intelligence. Norwood, NJ: Ablex.
Pylyshyn, Z. W. (1999). What’s in your mind? In E. Lepore & Z. W. Pylyshyn (Eds.), What is cognitive science? (pp. 1–25). MA: Blackwell Publishers.
Rubel, L. A. (1985). The brain as an analog computer. Journal of Theoretical Neurobiology, 4, 73–81.
Rumelhart, D. E., & McClelland, J. L. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge: The MIT Press.
Ryle, G. (1949). The concept of mind. London: Hutchinson.
Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–424.
Searle, J. R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64, 21–37.
Siegelmann, H. T. (1999). Neural networks and analogue computation: Beyond the Turing limit. Boston: Birkhauser.
Siu, K. Y., Roychowdhury, V., & Kailath, T. (1995). Discrete neural computation: A theoretical foundation. Englewood Cliffs, NJ: Prentice Hall.
Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1–23.
Smolensky, P. (1991). The constituent structure of connectionist mental states: A reply to Fodor and Pylyshyn. In T. Horgan & J. L. Tienson (Eds.), Connectionism and the philosophy of mind (pp. 281–308). Dordrecht: Kluwer.
Smolensky, P. (1995). Constituent structure and explanation in an integrated connectionist/symbolic cognitive architecture. In C. MacDonald & G. MacDonald (Eds.), Connectionism: Debates on psychological explanation (pp. 223–290). U.K.: Blackwell Publishers.
Smolensky, P., & Legendre, G. (2006). The harmonic mind: From neural computation to optimality-theoretic grammar. Cambridge, MA: The MIT Press.
Spivey, M. (2007). The continuity of mind. Oxford: Oxford University Press.
Stanley, J., & Williamson, T. (2001). Knowing how. Journal of Philosophy, 98, 411–444.
Stepp, N., Chemero, A., & Turvey, M. T. (2011). Philosophy for the rest of cognitive science. Topics in Cognitive Science, 3, 425–437.
Thagard, P. (2005). Mind: Introduction to cognitive science (2nd ed.). Cambridge: MIT Press.
Thelen, E., & Smith, L. B. (1994). A dynamical systems approach to the development of cognition and action. Cambridge, MA: The MIT Press.
Thompson, E. (2007). Mind in life: Biology, phenomenology and the sciences of mind. Cambridge, MA: Harvard University Press.
Tienson, J. L. (1988). An introduction to connectionism. Southern Journal of Philosophy, 26, 1–16.
Trappenberg, T. (2010). Fundamentals of computational neuroscience (2nd ed.). Oxford: Oxford University Press.
van Gelder, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, 21, 615–665.
van Gelder, T., & Port, R. F. (1995). It’s about time: An overview of the dynamical approach to cognition. In T. van Gelder & R. F. Port (Eds.), Mind as motion. Cambridge, MA: The MIT Press.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge, MA: The MIT Press.
von Eckardt, B. (1993). What is cognitive science? Cambridge, MA: The MIT Press.
Wallace, B., Ross, A., Davies, J., & Anderson, T. (2007). The mind, the body and the world: Psychology after cognitivism? Exeter: Imprint Academic.
Waltz, D. L., & Pollack, J. B. (1985). Massively parallel parsing: A strongly interactive model of natural language interpretation. Cognitive Science, 9, 51–74.
Weiskopf, D. A. (2011). Models and mechanisms in psychological explanation. Synthese, 183, 313–338.
White, G. (2011). Descartes among the robots: Computer science and the inner/outer distinction. Minds and Machines, 21, 179–202.
Zednik, C. (2011). The nature of dynamical explanation. Philosophy of Science, 78, 238–263.
Acknowledgments
Many thanks to Gualtiero Piccinini and Chris Eliasmith for insightful comments on earlier drafts of this paper. I am grateful to Phillip Staines for his constructive and useful remarks on various drafts of this paper. A much earlier version of this paper was presented at the 2009 AAP conference in Melbourne, Australia. I thank several anonymous referees for their helpful comments and criticisms that resulted in a drastically improved paper. All the people mentioned above contributed to the final draft of the paper, but I am solely responsible for any remaining mistakes.
Fresco, N. The Explanatory Role of Computation in Cognitive Science. Minds & Machines 22, 353–380 (2012). https://doi.org/10.1007/s11023-012-9286-y