Abstract
Deception has recently received a significant amount of attention. One of the main reasons is that it lies at the intersection of various areas of research, such as the evolution of cooperation, animal communication, ethics, and epistemology. This essay focuses on the biological approach to deception and argues that the standard definitions put forward by most biologists and philosophers are inadequate. We provide a functional account of deception which solves the problems of extant accounts in virtue of two characteristics: deceptive states have the function of causing misinformative states, and they need not provide direct benefits to the deceivers or losses to the targets.
Notes
In the remainder of the paper, we will be explicit when we refer to this weak version; by default, talk of ‘harm-benefit condition’ refers to the necessary version.
This set of cases is sometimes labelled 'functional deception'. However, given that we will defend a functional theory of deception (according to which all cases of deception are functional), we will sometimes refer to these cases of deception as non-intentional.
If messages are costly, these functions should also take signals into account, becoming πs, πr: Q × M × A → R.
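The two payoff signatures in this note can be sketched concretely. The following is a minimal illustration with hypothetical states, messages, acts and payoff values (none of which appear in the text): with cost-free signals the payoff depends only on the state q and the receiver's act a; once signals are costly, the message m enters the payoff function as well.

```python
# Hypothetical sender-receiver game, purely for illustration.
Q = ["predator", "no_predator"]   # states of the world
M = ["call", "silence"]           # messages
A = ["flee", "stay"]              # receiver acts

def payoff_costfree(q, a):
    """pi: Q x A -> R. Players gain 1 when the act matches the state."""
    return 1.0 if (q == "predator") == (a == "flee") else 0.0

def payoff_costly(q, m, a, cost=0.2):
    """pi: Q x M x A -> R. Same payoff, minus a production cost for calling."""
    return payoff_costfree(q, a) - (cost if m == "call" else 0.0)
```

With common-interest payoffs like these, sender and receiver share a single function; in partial-conflict games one would define separate πs and πr of the same type.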
This measure derives from Kullback and Leibler (1951).
Note that any signal that satisfies the first inequality will also satisfy the second: since the probabilities must sum to 1, a decrease in the probability of the actual state q implies an increase in the probability of some non-actual state.
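The point of notes 8 and 9 can be checked numerically. Below is a minimal sketch with hypothetical probability values: the Kullback–Leibler-derived quantity log2(P(q|m)/P(q)) measures the information a signal m carries about a state q, and whenever it is negative for the actual state (the signal is misinformative about it), normalisation guarantees that some non-actual state's probability has risen.

```python
import math

# Hypothetical prior and posterior distributions over three states.
prior     = {"q1": 0.5, "q2": 0.3, "q3": 0.2}   # P(q)
posterior = {"q1": 0.3, "q2": 0.5, "q3": 0.2}   # P(q|m), after receiving m

def info(q):
    """Information m carries about q: log2(P(q|m) / P(q))."""
    return math.log2(posterior[q] / prior[q])

actual = "q1"
# First inequality: the signal lowers the probability of the actual state.
assert info(actual) < 0
# Second inequality follows: since both distributions sum to 1, some
# non-actual state's probability must have increased.
assert any(posterior[q] > prior[q] for q in prior if q != actual)
```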
Cases of organisms that withhold information may be interpreted as additional examples of deception without signalling. For instance, many avian and mammalian species perform a call when discovering food, but on certain occasions (e.g. when the food is unlikely to be discovered by others) an individual might not send the call (Hauser 1997). However, such cases may be accommodated by D by interpreting silence as a further signal (Skyrms 2010).
Note that states that are not signals might still carry information (and misinformation) about the world. This follows, for instance, from Skyrms’ proposal: a state carries information about another state iff it changes its prior probability, and this condition can be fulfilled by non-signals. However, note that this notion of information-carrying is extremely liberal. In this sense, for instance, any state carries information about its actual cause and carries misinformation about all other possible (non-actual) causes.
Several authors have recently criticised Skyrms’ account as too liberal, because of its focus on misinformation; they identify intuitive cases of non-deceptive signals that Skyrms’ definition would rule as deceptive (Martínez 2015; McWhirter 2015). As a result, they would probably see Skyrms’ account as providing necessary (but not sufficient) conditions for deception. By contrast, if our arguments are correct the conditions put forward by Skyrms are not even necessary for deception.
An extreme example of an altruistic white lie can be found in the movie “Love Me No More”, in which the terminally ill main character, rather than reveal his condition, chooses to start behaving obnoxiously towards his family and friends, so that they are less affected when he dies.
We discuss possible reasons for this in Sect. 5. At this point it suffices to say that this fact connects with the weak version of the harm-benefit condition: since deceptive states tend to benefit the deceiver and harm the deceived, cases where this does not happen should be harder to spot.
It may be objected that the example involves meaning change rather than deception: the meaning of the alarm call would now be a disjunction (‘one of the two predators is close’). However, the fact that the fleeing behaviour is not adapted to the new predator, but merely preferable to standing still, mitigates this intuition. In Skyrms’ (2010) vocabulary, this signalling system would feature a bottleneck: a signal used for two different states of the world, for which different actions would be preferable. And bottlenecks are misinformative—at least according to Skyrms’ account.
Could these scenarios hold at equilibrium? This is a difficult question we cannot address here (see, for instance, Wagner 2012). Nonetheless, note that deception can obviously take place out of equilibrium (see below), so these cases would constitute counterexamples to standard definitions of deception, even if they were wiped out at equilibrium.
McWhirter adds that this definition usually works for signalling models because sender benefit and receiver harm are built in.
Note that the Error Condition is distinct from what is called the problem of error in the context of defining propositional content. The latter is the problem of defining propositional content in such a way that a signal can have false propositional content (which lies beyond the scope of this essay). The former concerns the conceptual distinction between deceptive and erroneous states.
Note that FD is similar to Smith’s (2014) recent proposal in the context of self-deception, with two significant differences. First, Smith’s notion of function does not cover intentional cases (see footnote 11). Consequently, his approach does not satisfy the Extensional Condition. Second, his account is committed to teleosemantics, which, although a promising and increasingly popular theory of representation (Millikan 1984; Shea 2007; Martinez 2013; Neander 2013), renders his account more specific than ours.
Mahon (2007, p. 185) argues that the false belief must also be caused in a normal way (e.g. inserting something into your brain might cause you to have a false representation, but it is not a case of deception). This is not just a philosophical quibble. Many parasites affect the nervous system of a host species in order to increase its susceptibility to predation and, in this way, reach their final host (Lafferty 1999). For instance, some members of the Microphallus species induce profound behavioural changes in their amphipod hosts, which make them swim at the surface, rather than the bottom, of the water. A full theory of deception might need to take that into account.
For instance, in (D) (McWhirter’s reconstruction of Skyrms’ account, given in Sect. 3.2), condition 1 concerns misinformation, while conditions 2 and 3 concern harm and benefit aspects. This isolation is also a feature of alternative definitions—for instance Searcy and Nowicki (2005: 5) and McWhirter’s (2015) own.
It may be argued that the account risks being circular, because misinformation itself may be defined in functional terms (for instance, following Millikan’s functional account of misrepresentation). However, our definition is compatible even with such a view. That a deceptive signal has the function of producing a misinformative state is consistent with a receiver’s misinformation being in turn defined functionally. In any case, authors such as Skyrms, McWhirter and Martínez all adopt non-functional concepts of misinformation.
As an additional virtue, this approach easily accounts for cases in which there is a (non-successful) attempt to deceive; here condition 1 is satisfied, but not condition 2.
Furthermore, FD can account for different cases of camouflaging depending on the kind of misinformative state they involve. In crypsis, when the organism is not supposed to be detected, misinformation involves the presence of a prey. In masquerading, where an organism is detected but pretends to be something else, misinformation involves a miscategorization of the object. In disruptive coloration, misinformation involves the organism's form or shape, and in cases of motion dazzle the predator wrongly estimates speed and trajectory (Stevens and Merilaita 2011: 5). Thus, misinformation may result from the misidentification of an object or from the misattribution of properties.
Smith’s (2014) account also relies on the etiological approach to function. According to him (following Millikan 1984), for a trait to have a function it has to be a reproduction of a past item. Consequently, one-shot intentions cannot be said to warrant functions. Smith concludes that his approach supports non-intentionalist accounts of self-deception. By contrast, as said above, our more liberal etiological approach is compatible with both kinds of deception, which further fulfils the extensional condition.
Similarly, Searcy and Nowicki remark that “deception defined in this way has sometimes been termed ‘functional deception’ (Hauser 1997), meaning that the behavior has the effects of deception without necessarily having the cognitive underpinnings that we would require of deception in humans” (2005: 5). This further reveals the appeal of a functional approach in the light of the extensional condition. However, their definition falls short of meeting the condition, as it still includes deceiver benefit.
This argument echoes Godfrey-Smith’s (1994) point that it would be “vacuous to say that [a] trait persisted because some specific effect was its function” (p. 354; original emphasis). This is because mentioning an etiological function presupposes that its effect has persisted. Similarly, it would be not vacuous, but redundant to add conditions on fitness that justify the persistence of a trait while calling this trait functional.
Birch (2014) suggests that we define the meaning of a signal by what it would have at the closest separating equilibrium of a signalling system, i.e. “an equilibrium at which there is a one-to-one mapping from states of the world to signals” (p. 503). If ‘closest’ is interpreted as one to which the population will converge, this view, once applied to deceptive signals, would amount to another forward-looking account of deception (which is not Birch’s focus). However, we could reply that deception also depends on what happened before the current state, and that one should add the condition that it has appeared after enough evolutionary time has passed. Just as it would be strange to grant that signals become meaningful immediately after their first appearance, so too traits need time to be considered deceptive.
Cases of interspecific deception may appear as counterexamples (in mimicking, all members of one species typically mimic something that helps them escape some predators). However, McWhirter only tackles intraspecific deception (in which members of the same population exchange signals and anyone can be sender or receiver).
References
Artiga, M. (2014). Signaling without cooperation. Biology and Philosophy, 29(3), 357–378.
Birch, J. (2014). Propositional content in signalling systems. Philosophical Studies, 171(3), 493–512.
Carson, T. (2010). Lying and deception. Oxford: Oxford University Press.
de Waal, F. (1982). Chimpanzee politics: Power and sex among apes. Baltimore: The Johns Hopkins University Press.
Erat, S., & Gneezy, U. (2012). White lies. Management Science, 58(4), 723–733.
Fallis, D. (2010). Lying and deception. Philosophers' Imprint, 10(11), 1–22.
Fallis, D. (2015). Skyrms on the possibility of universal deception. Philosophical Studies, 172(2), 375–397.
Foster, K. R., Wenseleers, T., & Ratnieks, F. L. W. (2001). Spite: Hamilton’s unproven theory. Annales Zooligici Fennici, 38, 229–238.
Godfrey-Smith, P. (1994). A modern history theory of functions. Noûs, 28(3), 344–362.
Godfrey-Smith, P. (2011). Signals: Evolution, learning & information, by Brian Skyrms. Mind, 120(480), 1288–1297.
Godfrey-Smith, P., & Martínez, M. (2013). Communication and common interest. PLOS Computational Biology, 9(11), e1003282. doi:10.1371/journal.pcbi.1003282.
Godfrey-Smith, P., & Sterelny, K. (2016). Biological information. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/information-biological/.
Güzeldere, G., Nahmias, E., & Deaner, R. (2002). Darwin’s continuum and the building blocks of deception. In M. Bekoff, C. Allen, & G. Burghardt (Eds.), The cognitive animal: Empirical and theoretical perspectives on animal cognition. Cambridge: MIT Press.
Hasson, O. (1994). Cheating signals. Journal of Theoretical Biology, 167(3), 223–238.
Hauser, M. D. (1996). The evolution of communication. Cambridge: MIT Press.
Hauser, M. (1997). Minding the behaviour of deception. In A. Whiten & R. W. Byrne (Eds.), Machiavellian intelligence II: Extensions and evaluations. Cambridge: Cambridge University Press.
Kirkpatrick, C. (2007). Tactical deception and the great apes: Insight into the question of theory of mind. Totem, 1, 31–37.
Kullback, S., & Leibler, R. A. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22(1), 79–86.
Lachmann, M., & Bergstrom, C. T. (2004). The disadvantage of combinatorial communication. Proceedings of the Royal Society of London B: Biological Sciences, 271, 2337–2343.
Lafferty, K. D. (1999). The evolution of trophic transmission. Parasitology Today, 15(3), 111–115.
Lewis, D. (1969). Convention: A philosophical study. Hoboken: Wiley.
Lloyd, J. (1975). Aggressive mimicry in Photuris fireflies: Signal repertoires by femmes fatales. Science, 187(4175), 452–453.
Mahon, J. A. (2007). A definition of deceiving. International Journal of Applied Philosophy, 21(2), 181–194.
Mahon, J. A. (2015). The definition of lying and deception. Stanford Encyclopedia of Philosophy.
Martinez, M. (2013). Teleosemantics and indeterminacy. Dialectica, 67(4), 427–453.
Martínez, M. (2015). Deception in sender–receiver games. Erkenntnis, 80(1), 215–227.
Maynard Smith, J., & Harper, D. (2003). Animal signals. Oxford: Oxford University Press.
McWhirter, G. (2016). Behavioural deception and formal models of communication. British Journal for the Philosophy of Science, 67(3), 757–780.
Millikan, R. (1984). Language, thought and other biological categories. Cambridge: MIT Press.
Mitchell, R. (1986). A framework for discussing deception. In R. Mitchell & N. Thompson (Eds.), Deception: Perspectives on human and nonhuman deceit. Albany: SUNY Press.
Neander, K. (2013). Toward an informational teleosemantics. In D. Ryder, J. Kingsbury, & K. Williford (Eds.), Millikan and her critics. Hoboken: Wiley.
Platt, D. R. (1969). Natural history of the hognose snakes, Heterodon platyrhinos and Heterodon nasicus. University of Kansas Publications, Museum of Natural History, 18, 253–420.
Ruxton, G., Sherratt, Th., & Speed, M. (2004). Avoiding attack: The evolutionary ecology of crypsis, warning signals and mimicry. Oxford: Oxford University Press.
Scott-Phillips, T., & Kirby, S. (2013). Information, influence and inference in language evolution. In U. Stegmann (Ed.), Animal communication theory: Information and influence (pp. 421–442). Cambridge: Cambridge University Press.
Searcy, W., & Nowicki, S. (2005). The evolution of animal communication. Princeton: Princeton University Press.
Semple, S., & McComb, K. (1996). Behavioural deception. Trends in Ecology & Evolution, 11(10), 434–437.
Shea, N. (2007). Consumers need information: Supplementing teleosemantics with an input condition. Philosophy and Phenomenological Research, 75(2), 404–435.
Skyrms, B. (2010). Signals: Evolution, learning, and information. Oxford: Oxford University Press.
Smith, D. (2014). Self-deception: A teleofunctional approach. Philosophia, 42(1), 181–199.
Stegmann, U. (2009). A consumer-based teleosemantics for animal signals. Philosophy of Science, 76(5), 864–875.
Stegmann, U. (2013). Animal communication theory: Information and influence. Cambridge: Cambridge University Press.
Sterelny, K., Joyce, R., Calcott, B., & Fraser, B. (Eds.). (2013). Cooperation and its evolution. Cambridge: MIT Press.
Stevens, M., & Merilaita, S. (2011). Animal camouflage. Cambridge: Cambridge University Press.
Talwar, V., & Lee, K. (2002). Emergence of white-lie telling in children between 3 and 7 years of age. Merrill-Palmer Quarterly, 48(2), 160–181.
Thornhill, R. (1979). Adaptive female-mimicking behavior in a scorpionfly. Science, 205(4404), 412–414.
Trivers, R. (2011). Deceit and self-deception: Fooling ourselves the better to fool others. London: Penguin.
Von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1), 1–16.
Wagner, E. (2012). Deterministic chaos and the evolution of meaning. British Journal for the Philosophy of Science, 63(3), 547–575.
West, S. A., Griffin, A. S., & Gardner, A. (2007). Social semantics: Altruism, cooperation, mutualism, strong reciprocity and group selection. Journal of Evolutionary Biology, 20(2), 415–432.
Wheeler, B. C. (2009). Monkeys crying wolf? Tufted capuchin monkeys use anti-predator calls to usurp resources from conspecifics. Proceedings of the Royal Society B, 276(1669), 3013–3018.
Wiley, R. H. (1994). Errors, exaggeration, and deception in animal communication. In L. A. Real (Ed.), Behavioral mechanisms in evolutionary ecology (pp. 157–189). Chicago: University of Chicago Press.
Williams, K., & Gilbert, L. (1981). Insects as selective agents on plant vegetative morphology: Egg mimicry reduces egg laying by butterflies. Science, 212(4493), 467–469.
Wilson, R., & Angilletta, M. (2015). Dishonest signaling during aggressive interactions: Theory and empirical evidence. In D. J. Irschick & M. Briffa (Eds.), Animal signaling and function: An integrative approach (pp. 205–227). Hoboken: Wiley.
Acknowledgements
We thank the members of the Munich Center for Mathematical Philosophy 2015 reading group on biological information and the audience of the 3rd Philosophy of Biology in the UK Conference (Bristol, 2016), the 21st Valencian Philosophy Conference (Castelló de la Plana, 2016) and the 4th Catalan Philosophy Conference (Vilafranca del Penedés, 2015). This research was partly supported by the Alexander von Humboldt Foundation (at the Munich Center for Mathematical Philosophy), the postdoctoral Grant FPDI-2013-16764 and the project “La Complejidad de la Percepción: Un Enfoque Multidimensional” (FFI2014-51811-P).