
Epistemic Conditions for Nash Equilibrium

Chapter in Readings in Formal Epistemology

Part of the book series: Springer Graduate Texts in Philosophy (SGTP, volume 1)

Abstract

Game theoretic reasoning has been widely applied in economics in recent years. Undoubtedly, the most commonly used tool has been the strategic equilibrium of Nash (Ann Math 54:286–295, 1951), or one or another of its so-called “refinements.” Though much effort has gone into developing these refinements, relatively little attention has been paid to a more basic question: Why consider Nash equilibrium in the first place?


Notes

  1. Selten (1965, 1975), Myerson (1978), Kreps and Wilson (1982), Kohlberg and Mertens (1986), and many others.

  2. An event is called common knowledge if all players know it, all know that all know it, and so on ad infinitum (Lewis 1969).

  3. See section “Structure of the game” for a discussion of why the payoff functions can be identified with the “structure of the game.”

  4. I.e., that the players are optimizers; given the opportunity, they will choose a higher payoff. A formal definition is given below.

  5. Other epistemic conditions for Nash equilibrium have been obtained by Armbruster and Boege (1979) and Tan and Werlang (1988).

  6. When a game is presented in strategic form, as here, knowledge of one’s own payoff function may be considered tautologous. See section “Interactive belief systems.”

  7. Harsanyi (1973), Armbruster and Boege (1979), Aumann (1987), Tan and Werlang (1988), and Brandenburger and Dekel (1989), among others.

  8. The marginal on j’s strategy space of i’s overall conjecture.

  9. Aumann (1987); for a formal definition, see section “Interactive belief systems.” Harsanyi (1967–1968) uses the term “consistency” to describe this situation.

  10. That is, what O knows is common knowledge among the players; each player knows everything that O knows.

  11. Readers unfamiliar with measure theory may think of the type spaces S_i as finite throughout the paper. All the examples involve finite S_i only. The results, too, are stated and proved without reference to measure theory, and may be understood completely in terms of finite S_i. On the other hand, we do not require finite S_i; the definitions, theorems, and proofs are all worded so that when interpreted as in section “General (infinite) belief systems,” they apply without change to the general case. One can also dispense with finiteness of the action spaces A_i; but that is both more involved and less important, and we will not do it here.

  12. Thus i’s actual payoff at the state s is g_i(s)(a(s)).

  13. In particular, i always knows whether or not H obtains.

  14. That is, Exp g_j(a_j, a_{−j}) ≥ Exp g_j(b_j, a_{−j}) for all b_j in A_j, when a_{−j} is distributed according to φ^j.

  15. We denote Q(a_{−i}) := Q(A_i × {a_{−i}}), Q(a_i) := Q({a_i} × A_{−i}), and so on.

  16. It is not what drives Example 41.3; since the individual conjectures are commonly known there, they must agree (Aumann 1976).

  17. They can easily be chosen so that rationality is common knowledge.

  18. I.e., to which she assigns positive probability.

  19. Alternatively, one can use a single probability system with countably many states, represented by a staircase anchored at the top left and extending infinitely downwards to the right (Jacob’s ladder?). By choosing a state sufficiently far from the top, one can get mutual knowledge of any given order.

  20. Private communication. The essential difference between the 1982 example of Geanakoplos and Polemarchakis and the above example of Geanakoplos is that in the former, Rowena’s and Colin’s probabilities for A approach each other as the order m of mutual knowledge approaches ∞, whereas in the latter, they remain at 2/3 and 1/2 no matter how large m is.

  21. Rather than the upper row, say.

  22. Though similar in form, this condition neither implies nor is implied by common priors. We saw in Example 41.4 that common priors do not even imply agreement between individual forecasts; a fortiori, they do not imply Arrow’s condition. In the opposite direction, Example 41.6 satisfies Arrow’s condition, but has no common prior.

  23. If one wishes, one can introduce a type E_1 of Matt to which Rowena’s and Colin’s types ascribe probability 0, and whose theory is, say, \( \frac{1}{4}Hh+\frac{1}{4}Tt+\frac{1}{4}Th+\frac{1}{4}Ht \).

  24. The σ-field of measurable sets is the smallest σ-field containing all the “rectangles” ×_{j≠i} T_j, where T_j is measurable in S_j.

  25. For related ideas, see Armbruster and Boege (1979), Boege and Eisele (1979), and Tan and Werlang (1988).

  26. As is apparent from the examples in sections “Illustrations,” “The main counterexamples,” and “Additional counterexamples,” in most of which the game g is common knowledge.

  27. See also Armbruster and Boege (1979) and Boege and Eisele (1979).

  28. The variables about which beliefs—of all orders—are held.

  29. At each state s of such a component S, the identity of that component is commonly known (i.e., it is commonly known at s that the true state is in S, though it need not be commonly known that the true state is s).

  30. This is because each type in B determines a belief hierarchy (see (a)). The set of n-tuples of all these belief hierarchies is the subspace of the universal belief space that is isomorphic to B.

  31. Unlike that of partitions (see (e) below).

  32. Another reason is that “belief” often refers to a general probability distribution, which does not go well with using “know” to mean “ascribe probability 1 to.”

  33. This is natural, as the results are slightly weaker (see (c)).

  34. Stated, but not proved, in Aumann (1987).

  35. Constant throughout the belief system.

  36. It would be desirable to see whether and how one can derive this kind of modified belief system from a Mertens–Zamir-type construction.

  37. This observation, for which we are indebted to Ben Polak, is of particular interest because in many applied contexts there is only one game under consideration, so it is of necessity commonly known.

  38. The proof would be simpler with a formalism in which known events are true (K_j E ⊂ E). See section “Alternative formalisms.”

References

  • Armbruster, W., & Boege, W. (1979). Bayesian game theory. In O. Moeschlin & D. Pallaschke (Eds.), Game theory and related topics. Amsterdam: North-Holland.

  • Aumann, R. (1976). Agreeing to disagree. Annals of Statistics, 4, 1236–1239.

  • Aumann, R. (1987). Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55, 1–18.

  • Boege, W., & Eisele, T. (1979). On solutions of Bayesian games. International Journal of Game Theory, 8, 193–215.

  • Brandenburger, A., & Dekel, E. (1989). The role of common knowledge assumptions in game theory. In F. Hahn (Ed.), The economics of missing markets, information, and games. Oxford: Oxford University Press.

  • Geanakoplos, J., & Polemarchakis, H. (1982). We can’t disagree forever. Journal of Economic Theory, 28, 192–200.

  • Harsanyi, J. (1967–1968). Games of incomplete information played by ‘Bayesian’ players, I–III. Management Science, 14, 159–182, 320–334, 486–502.

  • Harsanyi, J. (1973). Games with randomly disturbed payoffs: A new rationale for mixed strategy equilibrium points. International Journal of Game Theory, 2, 1–23.

  • Kohlberg, E., & Mertens, J.-F. (1986). On the strategic stability of equilibria. Econometrica, 54, 1003–1037.

  • Kreps, D., & Wilson, R. (1982). Sequential equilibria. Econometrica, 50, 863–894.

  • Lewis, D. (1969). Convention: A philosophical study. Cambridge: Harvard University Press.

  • Maynard Smith, J. (1982). Evolution and the theory of games. Cambridge: Cambridge University Press.

  • Mertens, J.-F., & Zamir, S. (1985). Formulation of Bayesian analysis for games with incomplete information. International Journal of Game Theory, 14, 1–29.

  • Myerson, R. (1978). Refinements of the Nash equilibrium concept. International Journal of Game Theory, 7, 73–80.

  • Nash, J. (1951). Non-cooperative games. Annals of Mathematics, 54, 286–295.

  • Savage, L. (1954). The foundations of statistics. New York: Wiley.

  • Selten, R. (1965). Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit. Zeitschrift für die gesamte Staatswissenschaft, 121, 301–324.

  • Selten, R. (1975). Reexamination of the perfectness concept for equilibrium points in extensive games. International Journal of Game Theory, 4, 25–55.

  • Tan, T., & Werlang, S. (1988). The Bayesian foundations of solution concepts of games. Journal of Economic Theory, 45, 370–391.


Correspondence to Robert J. Aumann.


Appendix: Extensions and Converses


We start with some remarks on Theorem 41.3 and its proof. First, the conclusions of Theorem 41.3 continue to hold under the slightly weaker assumption that the common prior assigns positive probability to φ = φ̄ being commonly known, and there is a state at which φ = φ̄ is commonly known and g = ḡ and the rationality of the players are mutually known.

Second, note that the rationality assumption is not used until the end of Theorem 41.3’s proof, after (41.3) is established. Thus if we assume only that there is a common prior that assigns positive probability to the conjectures φ^i being commonly known, we may conclude that all players i have the same conjecture σ_j for each other player j, and that each φ^i is the product of the σ_j with j ≠ i; that is, the n − 1 conjectures of each player about the other players are independent.
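The independence conclusion, that each conjecture is the product of its one-player marginals, is easy to test mechanically in the finite case. The following sketch checks whether a conjecture over two opponents’ action pairs factors into the product of its marginals; the joint distributions and all names are invented for illustration and are not the chapter’s notation.

```python
# Check whether a finite joint distribution over pairs (a_j, a_k) is the
# product of its marginals, i.e. whether the two coordinates are independent.

def marginals(joint):
    """Marginal distributions of a joint distribution over pairs (a_j, a_k)."""
    mj, mk = {}, {}
    for (aj, ak), p in joint.items():
        mj[aj] = mj.get(aj, 0.0) + p
        mk[ak] = mk.get(ak, 0.0) + p
    return mj, mk

def is_product(joint, tol=1e-9):
    """True iff joint(a_j, a_k) == marginal_j(a_j) * marginal_k(a_k) for
    every pair in the product of the supports (missing pairs count as 0)."""
    mj, mk = marginals(joint)
    return all(abs(joint.get((aj, ak), 0.0) - pj * pk) < tol
               for aj, pj in mj.items() for ak, pk in mk.items())

# A product conjecture and a correlated one (both with uniform marginals).
independent = {("L", "l"): 0.25, ("L", "r"): 0.25,
               ("R", "l"): 0.25, ("R", "r"): 0.25}
correlated = {("L", "l"): 0.5, ("R", "r"): 0.5}

print(is_product(independent))   # True
print(is_product(correlated))    # False
```

Note that the correlated example fails the test even though both of its marginals are uniform: independence is a property of the joint distribution, not of the marginals.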

Third, if in Theorem 41.3 we assume that the game being played is commonly (not just mutually) known, then we can conclude that the rationality of the players is also commonly known (see note 37). That is, we have

Proposition 41.A1

Suppose that at some state s, the game g and the conjectures φ^i are commonly known and rationality is mutually known. Then at s, rationality is commonly known. (Note that common priors are not assumed here.)

Proof

Set G := [g], F := [φ], R_j := [j is rational], and R := [all players are rational] = R_1 ∩ ⋯ ∩ R_n. In these terms, the proposition says that CK(G ∩ F) ∩ K^1 R ⊂ CK R. We assert that it is sufficient for this to prove

$$ K^2\left(G\cap F\right)\cap K^1R\subset K^2R. $$
(41.A1)

Indeed, if we have (41.A1), an inductive argument using Lemma 41.5 and the fact that E ⊂ E′ implies K^1(E) ⊂ K^1(E′) (which follows from Lemma 41.2) yields K^m(G ∩ F) ∩ K^1 R ⊂ K^m R for every m; taking intersections over all m then gives CK(G ∩ F) ∩ K^1 R ⊂ CK R.
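The operator calculus used in this argument (K^1 for mutual knowledge, iterated K^m, and CK for common knowledge) can be made concrete on a finite partition model. The sketch below is purely illustrative: the states and partitions are invented, but K_i, K^1, and CK are computed exactly as in the iterated definition of note 2, with CK(E) obtained as the limit of the decreasing chain K^1E ⊇ K^2E ⊇ ⋯.

```python
# Minimal partition model of knowledge (illustrative; not from the chapter).
# States are labelled 1..4; each player's information is a partition of the
# state space.  K_i(E) = {s : i's partition cell at s is contained in E}.

STATES = frozenset({1, 2, 3, 4})

# Hypothetical information partitions for two players.
PARTITIONS = {
    "Rowena": [frozenset({1, 2}), frozenset({3, 4})],
    "Colin": [frozenset({1}), frozenset({2, 3}), frozenset({4})],
}

def cell(player, s):
    """The element of `player`'s partition containing state s."""
    return next(c for c in PARTITIONS[player] if s in c)

def K(player, event):
    """States at which `player` knows the event."""
    return frozenset(s for s in STATES if cell(player, s) <= event)

def mutual(event):
    """K^1 E: states at which every player knows E."""
    result = STATES
    for p in PARTITIONS:
        result &= K(p, event)
    return result

def common(event):
    """CK E: iterate K^1 until a fixed point is reached.  The chain
    E ⊇ K^1E ⊇ K^2E ⊇ ... decreases, so on a finite space it stabilizes."""
    current = event
    while True:
        nxt = mutual(current)
        if nxt == current:
            return current
        current = nxt

E = frozenset({1, 2, 3})
print(mutual(E))   # E is mutually known on a smaller set of states
print(common(E))   # and may be commonly known nowhere
```

In this invented example E = {1, 2, 3} is mutually known at {1, 2}, second-order mutually known only at {1}, and commonly known nowhere, which mirrors the way the m-th order operators K^m strictly shrink in the staircase construction of note 19.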

Let j be a player, and let B_j be the set of actions a_j of j to which the conjecture φ^i of some other player i assigns positive probability. Let E_j := [a_j ∈ B_j] (the event that the action chosen by j is in B_j). Since the game g and the conjectures φ^i are commonly known at s, they are a fortiori mutually known there; so by Lemma 41.6, each action in B_j maximizes g_j against φ^j. Hence E_j ∩ G ∩ F ⊂ R_j. At each state in F, each player other than j knows that j’s action is in B_j; that is, F ⊂ ∩_{i≠j} K_i E_j. So G ∩ F ⊂ (∩_{i≠j} K_i E_j) ∩ (G ∩ F). So Lemmas 41.2 and 41.5 yield

$$ \begin{aligned} K^2\left(G\cap F\right) &\subset K^2\left({\cap}_{i\ne j}K_iE_j\right)\cap K^2\left(G\cap F\right) \subset K^1\left({\cap}_{i\ne j}K_iE_j\right)\cap K^2\left(G\cap F\right)\\ &\subset K^1\left({\cap}_{i\ne j}K_iE_j\right)\cap K^1\left({\cap}_{i\ne j}K_i\left(G\cap F\right)\right) = K^1\left({\cap}_{i\ne j}K_i\left(E_j\cap G\cap F\right)\right)\\ &\subset K^1\left({\cap}_{i\ne j}K_iR_j\right). \end{aligned} $$

Hence using R ⊂ R_j and R_j = K_j R_j (Corollary 41.2), we obtain

$$ \begin{aligned} K^2\left(G\cap F\right)\cap K^1R &\subset K^1\left({\cap}_{i\ne j}K_iR_j\right)\cap K^1R \subset K^1\left({\cap}_{i\ne j}K_iR_j\right)\cap K^1R_j\\ &= K^1\left({\cap}_{i\ne j}K_iR_j\right)\cap K^1K_jR_j = K^2R_j. \end{aligned} $$

Since this holds for all j, Lemma 41.2 yields (41.A1) (see note 38). ■

Our fourth remark is that in both Theorems 41.2 and 41.3, mutual knowledge of rationality may be replaced by the assumption that each player knows the others to be rational; in fact, all players may themselves be irrational at the state in question. (Recall that “know” means “ascribe probability 1”; thus a player may be irrational even though another player knows that he is rational.)

We come next to the matter of converses to our theorems. We have already mentioned (at the end of section “Description of the results”) that the conditions are not necessary, in the sense that it is quite possible to have a Nash equilibrium even when they are not fulfilled. In Theorem 41.1, the action n-tuple a(s) at a state s may well be a Nash equilibrium even when a(s) is not mutually known, whether or not the players are rational. (But if the actions are mutually known at s and a(s) is a Nash equilibrium, then the players are rational at s; cf. Remark 41.2.) In Theorem 41.2, the conjectures at a state s in a two-person game may constitute a Nash equilibrium even when, at s, they are not mutually known and/or rationality is not mutually known. Similarly for Theorem 41.3.
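The discussion of sufficiency versus necessity presupposes the basic best-response test for Nash equilibrium: an action profile is an equilibrium iff no player gains by a unilateral deviation. As a concrete reference point, here is a minimal sketch of that test; the 2×2 payoff matrices are invented for illustration.

```python
# Best-response check for a pure Nash equilibrium in a two-player game.
# payoffs[player][row_action][col_action]; matrices invented for illustration
# (a simple coordination game).

payoffs = {
    "row": [[3, 0], [0, 2]],   # row player's payoffs
    "col": [[3, 0], [0, 2]],   # column player's payoffs
}

def is_nash(row_a, col_a):
    """True iff neither player gains by deviating unilaterally."""
    row_ok = all(payoffs["row"][row_a][col_a] >= payoffs["row"][b][col_a]
                 for b in range(2))
    col_ok = all(payoffs["col"][row_a][col_a] >= payoffs["col"][row_a][b]
                 for b in range(2))
    return row_ok and col_ok

print(is_nash(0, 0))   # matched on the first action: equilibrium
print(is_nash(0, 1))   # mismatched actions: not an equilibrium
```

The coordination game has two pure equilibria, (0, 0) and (1, 1), consistent with the point made above that an action profile can be an equilibrium at a state without the epistemic conditions holding there.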

Nevertheless, there is a sense in which the converses hold: Given a Nash equilibrium in a game g, one can construct a belief system in which the conditions are fulfilled. For Theorem 41.1, this is immediate: Choose a belief system where each player i has just one type, whose action is i’s component of the equilibrium and whose payoff function is g_i. For Theorems 41.2 and 41.3, we may suppose that, as in the traditional interpretation of mixed strategies, each player chooses an action by an independent conscious randomization according to his component σ_i of the given equilibrium σ. The types of each player correspond to the different possible outcomes of the randomization; each type chooses a different action. All types of player i have the same theory, namely the product of the mixed strategies of the other n − 1 players appearing in σ, and the same payoff function, namely g_i. It may then be verified that the conditions of Theorems 41.2 and 41.3 are met.
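The construction just described can be sketched in code: each type of a player corresponds to an action in the support of that player’s mixed strategy, its theory is the product of the other players’ mixed strategies, and one verifies that every such type’s action is optimal against that conjecture. The game used below (matching pennies with its unique mixed equilibrium) and all names are illustrative assumptions, not the chapter’s notation.

```python
# Sketch of the "converse" construction for a two-player game: types are the
# actions in the support of each player's mixed strategy; each type's theory
# is the other player's mixed strategy.  We verify that each type's action is
# optimal against that conjecture.  Matching pennies, for illustration.

payoff_row = [[1, -1], [-1, 1]]    # row wants to match
payoff_col = [[-1, 1], [1, -1]]    # column wants to mismatch
sigma = {"row": [0.5, 0.5], "col": [0.5, 0.5]}   # the mixed equilibrium

def expected(payoff, own_action, other_mix, is_row):
    """Expected payoff of a pure action against the other's mixed strategy."""
    if is_row:
        return sum(other_mix[c] * payoff[own_action][c] for c in range(2))
    return sum(other_mix[r] * payoff[r][own_action] for r in range(2))

def types_are_rational():
    """Each type (= action with positive probability under sigma) maximizes
    expected payoff against its conjecture, the other player's mix."""
    for a in range(2):
        if sigma["row"][a] > 0:
            best = max(expected(payoff_row, b, sigma["col"], True)
                       for b in range(2))
            if expected(payoff_row, a, sigma["col"], True) < best:
                return False
        if sigma["col"][a] > 0:
            best = max(expected(payoff_col, b, sigma["row"], False)
                       for b in range(2))
            if expected(payoff_col, a, sigma["row"], False) < best:
                return False
    return True

print(types_are_rational())   # every type's action is optimal
```

This is exactly the verification promised at the end of the paragraph above: at the equilibrium, every action in each support is a best reply to the product conjecture, so every type is rational.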

These “converses” show that the sufficient conditions for Nash equilibrium in our theorems are not too strong, in the sense that they do not imply more than Nash equilibrium; every Nash equilibrium is attainable with these conditions. Another sense in which they are not too strong—that the conditions cannot be dispensed with or even appreciably weakened—was discussed in sections “The main counterexamples” and “Additional counterexamples.”

Copyright information

© 2016 Springer International Publishing Switzerland

Cite this chapter

Aumann, R.J., Brandenburger, A. (2016). Epistemic Conditions for Nash Equilibrium. In: Arló-Costa, H., Hendricks, V., van Benthem, J. (eds) Readings in Formal Epistemology. Springer Graduate Texts in Philosophy, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-319-20451-2_41
