
Does Computation Reveal Machine Cognition?


Abstract

This paper seeks to understand machine cognition, the nature of which has long been shrouded in obscurity. Familiar arguments in cognitive science hold that human cognition is still only faintly understood. This paper argues that machine cognition is far less understood than even human cognition, despite the fact that a great deal is known about computer architecture and computational operations. Although there have been putative claims about the transparency of the notion of machine computation, these claims do not hold up in unraveling machine cognition, let alone machine consciousness (if there is any such thing). The nature and form of machine cognition are further obscured by attempts to explain human cognition in terms of computation and to model or simulate (aspects of) human cognitive processing in machines. Given that these problems in characterizing machine cognition persist, a view of machine cognition that avoids them is outlined. The argument advanced is that something becomes a computation in machines only when a human interprets it, which is a kind of semiotic causation. From this it follows that a computing machine is not engaged in a computation unless a human interprets what it is doing; rather, it is engaged in machine cognition, which is defined as a member or subset of the set of all possible mappings from inputs to outputs. The human interpretation, which is a semiotic process, gives meaning to what a machine does, and only then does what it does become a computation.
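
As a minimal illustration of the central claim (my own sketch, not drawn from the paper), consider a fixed physical device modeled as a mapping from voltage pairs to a voltage. Which Boolean computation the device performs depends entirely on how a human interpreter reads the voltages: under one hypothetical interpretation the very same mapping is an AND gate, under another it is an OR gate. The device, the voltage levels, and both interpretations below are assumptions made for the example.

    # One fixed physical device: a mapping from voltage pairs to a voltage.
    device = {
        (0.0, 0.0): 0.0,
        (0.0, 5.0): 0.0,
        (5.0, 0.0): 0.0,
        (5.0, 5.0): 5.0,
    }

    def truth_table(reading):
        """Read the device's behaviour through an interpretation (voltage -> bit)."""
        return {(reading[a], reading[b]): reading[out]
                for (a, b), out in device.items()}

    # Two hypothetical human interpretations of the same voltages.
    high_is_one = {0.0: 0, 5.0: 1}   # read 5 V as logical 1
    low_is_one  = {0.0: 1, 5.0: 0}   # read 0 V as logical 1

    print(truth_table(high_is_one))  # the AND truth table
    print(truth_table(low_is_one))   # the same device, now the OR truth table

On the view outlined above, the raw voltage mapping is the machine cognition; "computing AND" or "computing OR" exists only relative to an interpreter.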

Notes

  1. In this connection, one may also relate this question to Rosen’s (1991) notion of a complex system, in which organization is significant insofar as the organizational principles of the system as a whole causally determine the relations and processes that obtain within it. In the current context, this means that the organizational principles pervade and encompass a systemic whole that connects machine cognition, by way of computation, to the human cognitive system at a more fundamental (ontological) level of organization.

  2. Not everybody who believes in computationalism thinks that human cognition must be identified with computation, and thus the claim that human cognition is something that cannot be implemented or simulated in machines is a weak objection in this sense. For instance, it is possible to say, by following Rapaport (2012), that human cognition is computable, regardless of whether human cognition is computation or not. However, this does not affect the force of the arguments marshaled in this paper.

  3. One needs to be cautious about relating this to the causal-informational view of semantics, as evident in Dretske (1981) and Fodor (1998). The causal-informational view of semantics demands that a causal relation, direct or mediated, obtain between objects and the concepts or expressions that refer to those objects. If the human interpretation, by virtue of humans’ intrinsic intentionality, possesses causal powers, these causal powers must derive from that intrinsic intentionality, which is a primitive concept and cannot be further decomposed (Jacquette 2011). Taken in this sense, neither expressions/signs nor objects can in themselves cause or causally determine anything in the mind (in contrast to the view espoused by proponents of causal-informational semantics), since all relations, causal or otherwise, are distilled and derived from humans’ intrinsic intentionality. And if this is so, the causality of the human interpretation process derived from humans’ intrinsic intentionality does not itself need to be caused by anything else, mainly because humans’ intrinsic intentionality is primary and more fundamental than anything else in nature.

  4. As has been pointed out by an anonymous reviewer of this paper, if machine cognition is potential computation that is ‘harvested’ by human cognition through a semiotic interpretation process, it would be interesting to see why this could not change. One may thus wonder what would happen when a machine can repair itself or perhaps even ‘reproduce’. It needs to be made explicit that the current view does not say that machine cognition can be derived from human cognition. Rather, machine cognition exists in a different domain: more particularly, in a domain of possibilities of mapping from inputs at some level of a machine state to outputs at some level of description of any other physical system, whereas (machine) computation is a consequence of the human interpretation defining some relation (which may well be a function) on the abstract trajectory through states constituting machine cognition. In this sense, when a machine can repair itself or perhaps even ‘reproduce’, and if the repair and reproduction have been made possible through the implementation of some program(s) designed by humans, then it is computations all the way: computations repairing (or renovating) and reproducing further computations, in a direction leading ever further away from machine cognition. Machine cognition therefore precedes any such implementation of human-designed programs in a machine that can repair itself or even ‘reproduce’. And thus machine cognition remains where it is, irrespective of whether the machine concerned repairs itself or even ‘reproduces’, or not.
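
The distinction drawn in Note 4 can be put schematically, again as a hedged sketch in my own notation rather than the author’s formalism: machine cognition is the whole space of possible mappings from inputs to outputs, while a computation is the single mapping that a human interpretation selects from that space and names.

    from itertools import product

    inputs = list(product([0, 1], repeat=2))  # all Boolean input pairs

    # "Machine cognition": every possible mapping from these inputs to outputs
    # (the 16 Boolean functions of two arguments).
    machine_cognition = [dict(zip(inputs, outs))
                         for outs in product([0, 1], repeat=len(inputs))]

    # A human interpretation picks one mapping out of that space and names it:
    # "this device computes AND". Only then, on the present view, does the
    # machine's behaviour count as that computation.
    interpreted_as_and = next(f for f in machine_cognition
                              if all(f[(a, b)] == (a and b) for (a, b) in inputs))

    print(len(machine_cognition))   # 16 possible mappings
    print(interpreted_as_and)       # the one mapping selected as 'AND'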

References

  • Bishop, J. M. (2009). A cognitive computation fallacy? Cognition, computations and panpsychism. Cognitive Computation, 1, 221–233.

  • Chalmers, D. J. (2012). A computational foundation for the study of cognition. Journal of Cognitive Science, 12(4), 323–357.

  • Deacon, T. (2012). Incomplete nature: How mind emerged from matter. New York: Norton.

  • Dennett, D. (1996). The intentional stance. Cambridge: MIT Press.

  • Dietrich, E., & Markman, A. (2003). Discrete thoughts: why cognition must use discrete representations. Mind and Language, 18(1), 95–119.

  • Dretske, F. (1981). Knowledge and the flow of information. Cambridge: MIT Press.

  • Dreyfus, H. (1992). What computers still can’t do: A critique of artificial reason. Cambridge: MIT Press.

  • Fodor, J. (1975). The language of thought. Cambridge: Harvard University Press.

  • Fodor, J. (1998). Concepts: Where cognitive science went wrong. Oxford: Oxford University Press.

  • Fresco, N. (2011). Concrete digital computation: what does it take for a physical system to compute? Journal of Logic, Language and Information, 20, 513–537.

  • Fresco, N. (2012). The explanatory role of computation in cognitive science. To appear in Minds and Machines.

  • Gazzaniga, M. S. (Ed.). (2009). The cognitive neurosciences (4th ed.). Cambridge: MIT Press.

  • Goertzel, B. (2007). Human level artificial intelligence and the possibility of a technological singularity. Artificial Intelligence, 171, 1161–1173.

  • Haugeland, J. (1998). Having thought: Essays in the metaphysics of mind. Cambridge: Harvard University Press.

  • Hoffmeyer, J. (2007). Semiotic scaffolding of living systems. In M. Barbieri (Ed.), Introduction to biosemiotics (pp. 149–166). Berlin: Springer.

  • Jacquette, D. (2011). Intentionality as a conceptually primitive relation. Acta Analytica, 26, 15–35.

  • McCarthy, J. (2007). From here to human-level AI. Artificial Intelligence, 171, 1174–1182.

  • Minsky, M. (2006). The emotion machine. New York: Simon & Schuster.

  • Pattee, H. H. (2008). Physical and functional conditions for symbols, codes and languages. Biosemiotics, 1, 147–168.

  • Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. Oxford: Oxford University Press.

  • Piccinini, G., & Scarantino, A. (2011). Information processing, computation and cognition. Journal of Biological Physics, 37, 1–38.

  • Proudfoot, D. (2011). Anthropomorphism and AI: Turing’s much misunderstood imitation game. Artificial Intelligence, 175, 950–957.

  • Putnam, H. (1988). Representation and reality. Cambridge: MIT Press.

  • Pylyshyn, Z. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge: MIT Press.

  • Rapaport, W. J. (2012). Semiotic systems, computers, and the mind: how cognition could be computing. International Journal of Signs and Semiotic Systems, 2(1), 32–71.

  • Rosen, R. (1991). Life itself: A comprehensive inquiry into the nature, origin, and fabrication of life. New York: Columbia University Press.

  • Rosen, R. (2000). Essays on life itself. New York: Columbia University Press.

  • Searle, J. (1992). The rediscovery of the mind. Cambridge: MIT Press.

  • Shagrir, O. (2006). Why we view the brain as a computer. Synthese, 153, 393–416.

  • Shagrir, O. (2012). Computation, implementation, cognition. Minds and Machines, 22(2), 137–148.

  • Smith, B. C. (1996). On the origin of objects. Cambridge: MIT Press.

  • Starzyk, J. A., & Prasad, D. K. (2011). A computational model of machine consciousness. International Journal of Machine Consciousness, 3, 237–253.

  • Tallis, R. (2011). Aping mankind: Neuromania, darwinitis and the misrepresentation of humanity. Durham: Acumen.

  • Taylor, J. (1991). Can neural networks ever be made to think? Neural Network World, 1, 4–11.

  • Tønnessen, M. (2010). Steps to a semiotics of being. Biosemiotics, 3, 375–392.

  • Torey, Z. (2009). The crucible of consciousness: An integrated theory of mind and brain. Cambridge: MIT Press.

Acknowledgments

I am thankful to an anonymous reviewer of this paper for significant comments on certain issues dealt with here, and for drawing my attention to some points that I had overlooked.

Author information

Correspondence to Prakash Mondal.

About this article

Mondal, P. Does Computation Reveal Machine Cognition?. Biosemiotics 7, 97–110 (2014). https://doi.org/10.1007/s12304-013-9179-3
