1 Introduction

This paper considers games with countably many players, each of whom has finitely many pure strategies. Such games are used to model interactions among a “very large” number (practically infinity) of agents, in which each agent matters. As such, they differ from games with a continuum of agents, in which each individual agent is negligible (Khan and Sun 2002).Footnote 1

In games with countably many players Nash equilibrium may fail to exist. The literature’s classic example for such non-existence is the following, due to Peleg (1969): the set of players is \(\mathbb {N}\), each player has the set of pure strategies \(\{0,1\}\), and each player i’s preferences over pure profiles \(a\in \{0,1\}^\mathbb {N}\) are given by the following utility function:

$$\begin{aligned} u_i(a)=\left\{ \begin{array}{ll} a_i &{} \quad \text {if}\; \sum _{j=1}^\infty a_j<\infty \\ -a_i &{} \quad \text {otherwise} \end{array}\right. \end{aligned}$$

The sum \(\sum _{j=1}^\infty a_j\) is finite if and only if there is a finite number of 1’s. Under a mixed-strategy profile, the actions \(a_j\) are independent random variables, so the occurrence of the event \(\{\sum _{j=1}^\infty a_j<\infty \}\) depends on a countable sequence of independent random variables; since it is invariant to the realization of any finite number of them, it is a tail event. Kolmogorov’s 0–1 Law (henceforth, the 0–1 Law) states that the probability of a tail event is either zero or one.Footnote 2 It follows from the 0–1 Law that this game does not have a Nash equilibrium. To see this, let p denote the probability of \(\{\sum _{j=1}^\infty a_j<\infty \}\) in a putative equilibrium. If \(p=1\) then the unique best response of each player is to play 1, which implies \(p=0\). If, on the other hand, \(p=0\), then each player’s unique best response is to play 0, which implies \(p=1\).
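The best-response cycle behind this argument can be sketched in a few lines (an illustration only; the function names are ad hoc, not from the paper):

```python
# Illustration of the non-existence argument in Peleg's game. If the tail
# event {sum a_j < infinity} has probability p, then player i's expected
# payoff from a_i = 1 is p*1 + (1-p)*(-1) = 2p - 1, while a_i = 0 pays 0.

def best_response(p):
    """Unique best response of any player when the tail event has probability p in {0, 1}."""
    return 1 if 2 * p - 1 > 0 else 0

def implied_tail_probability(action):
    """The tail probability forced by the 0-1 Law when every player plays `action`."""
    # all play 1: infinitely many 1's, the sum diverges, so p = 0
    # all play 0: finitely many 1's, the sum is finite, so p = 1
    return 0 if action == 1 else 1

# no candidate tail probability is consistent with best-responding behavior
for p in (0, 1):
    assert implied_tail_probability(best_response(p)) != p
```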

When an exact Nash equilibrium does not exist, it is natural to resort to an approximate or \(\epsilon \) equilibrium (Radner 1980): a strategy profile such that no player can increase his expected payoff by more than \(\epsilon \) through a unilateral deviation. Since \(\epsilon \) equilibrium imposes no restrictions on the individual actions in the strategy’s support, it is possible that an overwhelmingly suboptimal action is played with a strictly positive probability in an \(\epsilon \) equilibrium, as long as this probability is sufficiently small. If one interprets a mixed action as a lottery that the player uses for the choice of an action, and the lottery picks an action with an extremely low payoff, then the player may become reluctant to execute this action.Footnote 3 This drawback motivates the following concept: a strategy profile is a strong \(\epsilon \) equilibrium (Solan and Vieille 2001) if each player assigns a positive probability only to pure strategies that are \(\epsilon \) best responses.Footnote 4 If we interpret mixed actions according to Harsanyi’s (1973) purification paradigm, then strong \(\epsilon \) equilibrium reflects interim rationalizability: that is, the players play \(\epsilon \) optimal strategies after observing their types.

It is easy to see that due to the 0–1 Law, Peleg’s game does not have a strong \(\epsilon \) equilibrium for \(\epsilon \in (0,1)\).Footnote 5 It does, however, possess an \(\epsilon \) equilibrium for any \(\epsilon >0\). To see this, consider the profile where each player plays the action 1 with probability \(\epsilon \). Under this profile, the probability of the event \(\big \{\sum _{j=1}^\infty a_j<\infty \big \}\) is zero;Footnote 6 therefore, this profile is an \(\epsilon \) equilibrium. Thus, Peleg’s classic game not only constitutes an example of a game (with countably many players and finitely many actions) in which exact equilibrium does not exist—it is also an example of a game where the existence of strong \(\epsilon \) equilibrium is not guaranteed by the existence of \(\epsilon \) equilibria.
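The deviation-gain computation behind this \(\epsilon \) equilibrium can be checked mechanically (a sketch that takes as given the fact, noted above, that the tail event has probability zero under this profile):

```python
# Sketch: in the profile where every player plays 1 with probability eps, the
# tail event {sum a_j < infinity} has probability 0, so the action 1 pays -1
# in expectation and the action 0 pays 0.
eps = 0.01
payoff_one = -1.0    # expected payoff of the pure action 1 (tail event has probability 0)
payoff_zero = 0.0    # expected payoff of the pure action 0
mixed = eps * payoff_one + (1 - eps) * payoff_zero   # expected payoff of the mixed action
gain = max(payoff_one, payoff_zero) - mixed          # best possible deviation gain
assert abs(gain - eps) < 1e-12                       # the gain is exactly eps
```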

Below I construct a game (with countably many players and finitely many actions) for which a strong \(\epsilon \) equilibrium exists for all \(\epsilon >0\), but a Nash equilibrium does not exist.

Models with infinitely many players are useful for describing interactions among small and anonymous agents. In a large finite game, the combination of “smallness” and anonymity finds its expression in the constraint that each agent cares only about the empirical distribution of his opponents’ actions (Kalai 2004; Azrieli and Shmaya 2013). In an infinite game, a simple counterpart of this requirement is that a change of finitely many of the opponents’ actions does not affect a player’s payoff; hereafter, a utility function (in an infinite game) with this property will be called a tail function. The requirement of anonymity, which is often central to many-agents models,Footnote 7 finds its expression in symmetry conditions which are imposed on the utility function; however, there is a significant difference between imposing symmetry conditions in finite games and doing so in infinite ones. Let \(a_i\) and \(a_{-i}\) denote player i’s own action and the profile of his opponents’ actions. In a finite game, symmetry is defined as the requirement that the utility of each player i is given by \(u(a_i,a_{-i})\), where the function u—which takes as its arguments a player’s own-action and the profile of his opponents’ actions—is the same for all players, and is invariant under permutations on the actions of others (Nash 1951; Moulin 1986). For infinite games, the requirement of utility-invariance under all permutations of opponents’ actions is extremely demanding.Footnote 8 In particular, it implies that the common utility function has a countable range, and if, in addition, it is a tail function, then it has a finite range. A natural way to weaken the invariance requirement is to impose utility-invariance only under finite permutations of opponents’ actions. I show that this weaker invariance condition allows for the game’s common utility function to be a tail function with a range that has the cardinality of the continuum.

Finally, I construct a symmetric infinite game in which Nash equilibria exist, but all of them are asymmetric. This example complements those of Fey (2012), who constructed games with finitely many players and infinitely many pure strategies, in which Nash equilibria exist, but all of them are asymmetric.

All the results of this paper involve games with discontinuous utility functions. This is no coincidence: Peleg (1969) proved that continuity of the utility functions guarantees the existence of Nash equilibrium [see also Salonen (2010)]. Therefore, under continuity there is no room for a discussion about the differences among \(\epsilon \), strong \(\epsilon \), and Nash equilibrium. The abovementioned example of a symmetric game with only asymmetric equilibria also necessitates discontinuity, because continuity of the utility function in a symmetric game (with countably many players and finitely many actions) implies that this game has a symmetric Nash equilibrium.Footnote 9

The rest of the paper is organized as follows. The next Section describes formal preliminaries. Section 3 contains the results about approximate equilibria and Sect. 4 contains the results about symmetric games.

2 Model

A game is a tuple \(G=[N,(A_i)_{i\in N}, (u_i)_{i\in N}]\). N is the set of players. For each \(i\in N\), \(A_i\) is the set of player i’s pure strategies (or actions), which is a finite non-empty set. The set of i’s mixed strategies is the set of probability distributions on \(A_i\), denoted \(\Delta (A_i)\). A typical element of \(\Delta (A_i)\) is denoted \(\alpha _i\). Let \((\times _{i\in N}A_i,\mathcal {A}_P)\) be the product measurable space \(\times _{i\in N}(A_i,2^{A_i})\). That is, \(\mathcal {A}_P\) is the product \(\sigma \)-algebra on \(\times _{i\in N}A_i\) generated by cylinder sets of the form \(\times _{i\in N}S_i\), where \(S_i\subset A_i\) for all \(i\in N\) and \(S_i=A_i\) for all but finitely many i’s. Each player i has a measurable utility function \(u_i\) on \((\times _{i\in N}A_i,\mathcal {A}_P)\), which is integrable w.r.t. each product measure \(\alpha \in \times _{i\in N}\Delta (A_i)\) on \(\mathcal {A}_P\). In the sequel, I will restrict my attention to such tuples G where N is countably infinite.Footnote 10

Player i’s expected utility under \(\alpha \) is denoted \(U_i(\alpha )\). The utility function \(u_i\) is a tail function if for every own-action x and any two profiles of the others’ actions, \(a_{-i}\) and \(b_{-i}\), the following holds: \(\big |\{j: a_j\ne b_j\}\big |<\infty \Rightarrow u_i(x,a_{-i})=u_i(x,b_{-i})\).

A Nash equilibrium is a profile \(\alpha \) such that the following holds for each i: \(U_i(\alpha )\ge U_i(\alpha ')\), where \(\alpha '\) is any alternative profile that satisfies \(\alpha _j'=\alpha _j\) for all \(j\in N\setminus \{i\}\). An action \(a_i\) that maximizes \(U_i\) given \(\alpha _{-i}\) is a best response (for i, against \(\alpha _{-i}\)); it is an \(\epsilon \) best response if \(U_i(a_i,\alpha _{-i})+\epsilon \ge \text {max}_{x\in A_i}U_i(x,\alpha _{-i})\). A profile \(\alpha \) is a strong \(\epsilon \) equilibrium (Solan and Vieille 2001) if for each \(i\in N\) the following holds: \(\alpha _i(a_i)>0\) implies that \(a_i\) is an \(\epsilon \) best response for i against \(\alpha _{-i}\). The mixed strategy \(\alpha _i\) is \(\epsilon \) optimal for i against \(\alpha _{-i}\) if \(U_i(\alpha )+\epsilon \ge U_i(\alpha ')\), where \(\alpha '\) is any alternative profile that satisfies \(\alpha _j'=\alpha _j\) for all \(j\in N\setminus \{i\}\). The profile \(\alpha \) is an \(\epsilon \) equilibrium if \(\alpha _i\) is \(\epsilon \) optimal against \(\alpha _{-i}\) for all \(i\in N\).
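For a finite action set, the strong \(\epsilon \) condition translates directly into code. The following helper is a sketch (the dictionary-based interface is my own, not part of the model): every action in the support must be an \(\epsilon \) best response.

```python
def is_strong_eps_component(alpha_i, expected_utility, eps):
    """Check the strong-eps condition for one player: alpha_i maps actions to
    probabilities, expected_utility maps each action x to U_i(x, alpha_{-i})."""
    best = max(expected_utility.values())
    return all(expected_utility[x] + eps >= best
               for x, prob in alpha_i.items() if prob > 0)

# an action 0.05 below the optimum may be in the support when eps = 0.1 ...
assert is_strong_eps_component({0: 0.5, 1: 0.5}, {0: 1.0, 1: 0.95}, eps=0.1)
# ... but an action 0.5 below the optimum may not
assert not is_strong_eps_component({0: 0.5, 1: 0.5}, {0: 1.0, 1: 0.5}, eps=0.1)
```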

In a game where \(A_i=A_j\) for all \(i,j\in N\), an equilibrium (Nash, \(\epsilon \), strong \(\epsilon \)) is symmetric if all players use the same strategy; otherwise, it is asymmetric.

3 Approximate equilibria

Existence of strong \(\epsilon \) equilibrium does not guarantee that of Nash equilibrium. The following game, \(G^{*}\), exemplifies this. The player set is \(\mathbb {N}\), each player has the strategy set \(\{0,1\}\), and the utility function is:

$$\begin{aligned} u_i(a)=\left\{ \begin{array}{ll} \frac{a_i}{1+\big |\{k: a_k=1\}\big |} &{} \quad \text {if}\; \sum _{j=1}^\infty a_j<\infty \\ -a_i &{} \quad \text {otherwise} \end{array}\right. \end{aligned}$$

Note that this utility function is obtained from that of Peleg’s game by a relatively minor change: replacing \(a_i\) by \(\frac{a_i}{1+\big |\{k: a_k=1\}\big |}\). That this game does not have a Nash equilibrium follows from the 0–1 Law, by the same argument as in Peleg’s game: the sign of a player’s payoff from the action 1 still depends only on whether the tail event \(\big \{\sum _{j=1}^\infty a_j<\infty \big \}\) occurs.

Proposition 1

The game \(G^{*}\) has a strong \(\epsilon \) equilibrium for all \(\epsilon >0\).

Proof

Let \(\epsilon >0\). Let \(m\in \mathbb {N}\) be such that \(\frac{1}{1+m}<\epsilon \). Consider the following pure strategy profile: each player in \(\{1,\ldots ,m\}\) plays the action 1, and every other player plays the action 0. Obviously, each player \(i\le m\) is playing a best response; each \(i>m\) can improve his payoff by only \(\frac{1}{2+m}<\epsilon \) via a unilateral deviation.\(\square \)
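The payoff computations in this proof are easy to verify numerically. In the sketch below (an illustration only), profiles with finitely many 1’s are encoded by the set of players who play 1:

```python
import math

def payoff_G_star(i, ones):
    """u_i in the game G* at a pure profile with finitely many 1's, where
    `ones` is the (finite) set of players who play the action 1."""
    a_i = 1 if i in ones else 0
    return a_i / (1 + len(ones))   # the sum is finite, so the first case of u_i applies

eps = 0.1
m = math.ceil(1 / eps)             # guarantees 1/(1+m) < eps
ones = set(range(1, m + 1))        # players 1..m play 1, everyone else plays 0

# each player i <= m is best-responding: deviating to 0 drops his payoff to 0
for i in range(1, m + 1):
    assert payoff_G_star(i, ones) > payoff_G_star(i, ones - {i})

# each player i > m gains exactly 1/(2+m) < eps by deviating to 1
i = m + 1
gain = payoff_G_star(i, ones | {i}) - payoff_G_star(i, ones)
assert gain == 1 / (2 + m) < eps
```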

A question that presents itself naturally is whether every game (in the class of games considered in this paper) admits an \(\epsilon \) equilibrium. As the following result shows, the answer to this question is negative.

Proposition 2

There exists a game that does not have an \(\epsilon \) equilibrium, for any \(\epsilon >0\).

Proof

Consider the following modification of Peleg’s (1969) game: the player set and common strategy set are \(\mathbb {N}\) and \(\{0,1\}\), respectively, and the utility function of player i is:

$$\begin{aligned} u_i(a)=\left\{ \begin{array}{ll} i^2\,\times \, a_i &{} \quad \text {if}\; \sum _{j=1}^\infty a_j<\infty \\ -i^2\,\times \, a_i &{} \quad \text {otherwise} \end{array}\right. \end{aligned}$$

I argue that this game does not have an \(\epsilon \) equilibrium, for any \(\epsilon >0\). To see this, fix an \(\epsilon >0\) and assume by contradiction that \(\alpha \) is an \(\epsilon \) equilibrium of the above game. Let p denote the probability of \(\sum _{j=1}^\infty a_j<\infty \) under \(\alpha \). By the 0–1 Law, \(p\in \{0,1\}\). If \(p=0\) then playing 1 yields \(-i^2\) while playing 0 yields 0, so \(\epsilon \) optimality forces the probability that each player i assigns to the pure action 1 to be bounded from above by \(\frac{\epsilon }{i^2}\). Since \(\sum _i\frac{\epsilon }{i^2}<\infty \), the Borel–Cantelli Lemma implies that only finitely many players play 1 almost surely, that is, \(p=1\). On the other hand, if \(p=1\), then the probability that each i assigns to 1 is bounded from below by \(1-\frac{\epsilon }{i^2}\), and hence, by the Borel–Cantelli Lemma, infinitely many players play 1 almost surely, that is, \(p=0\).\(\square \)
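The convergence fact behind the Borel–Cantelli step can be checked numerically (a sketch; the bound \(\epsilon /i^2\) is the one derived in the proof):

```python
import math

# If p = 0, each alpha_i(1) <= eps/i^2, and the series sum_i eps/i^2 converges
# (to eps * pi^2 / 6); by the first Borel-Cantelli Lemma, only finitely many
# players play 1 almost surely, which forces p = 1.
eps = 0.5
partial_sum = sum(eps / i**2 for i in range(1, 100001))
assert partial_sum < eps * math.pi**2 / 6   # partial sums stay below the finite limit

# Symmetrically, if p = 1 then alpha_i(0) <= eps/i^2; the same convergent series
# bounds sum_i alpha_i(0), so only finitely many players play 0 almost surely,
# infinitely many play 1, and p = 0.
```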

The utilities in the game from the above proof are not uniformly bounded—there is no \(M\in \mathbb {R}_+\) such that \(|u_i(a)|\le M\) for all i and a. It is an open problem whether \(\epsilon \) equilibrium is guaranteed to exist in the class of games with uniformly bounded utilities.

4 Symmetric games

Consider the case where all players have the same set of strategies, and there is a function u such that for each player i the utility function takes the form \(u_i(a)=u(a_i,a_{-i})\). In a finite game, if the utilities assume this form where u is invariant to permutations in its second argument, the game is called symmetric. Consider the extension of this definition to our model: namely, a game G is symmetric if there is a set A and a function \(u:A^N\rightarrow \mathbb {R}\) such that (i) for all \(i\in N\) it is true that \(A_i=A\) and \(u_i(a)=u(a_i,a_{-i})\), and (ii) the function u satisfies \(u(a_i,a_{-i})=u(a_i,\pi \circ a_{-i})\) for all profiles a and all permutations \(\pi :(N\setminus \{i\})\rightarrow (N\setminus \{i\})\).

Our first result in this section shows that this naive extension of the symmetry definition is extremely demanding.

Proposition 3

Let A and u be the strategy set and utility function of a symmetric game. Then \(\big |\{u(a): a\in A^N\}\big |\le \aleph _0\). Moreover, if u is a tail function then \(\big |\{u(a): a\in A^N\}\big |\le |A|\big [2^{|A|}-1\big ]\).

Proof

Let A and u be the strategy set and utility function of a symmetric game. Note that for every two profiles, a and b, the combination of (I) and (II) implies that \(u(a_i,a_{-i})=u(b_i,b_{-i})\), where:

  • (I) \(a_i=b_i\),

  • (II) \(\forall x\in A\): \(\Big |\big \{j\in N\setminus \{i\}:a_j=x\big \}\Big |=\Big |\big \{j\in N\setminus \{i\}:b_j=x\big \}\Big |\).

Since, in any given pure profile, each \(x\in A\) appears either infinitely many times or exactly k times for some \(k\in \mathbb {N}\cup \{0\}\), it follows that the cardinality of u’s range is bounded by the cardinality of \(A\times \big (\mathbb {N}\cup \{\infty \}\big )^{A}\), which is \(\aleph _0\).

Now suppose in addition that u is a tail function. Let \(\{A_1,\ldots ,A_K\}\) be the collection of the non-empty subsets of A (so \(K=2^{|A|}-1\)). For each \(k\in \{1,\ldots ,K\}\) let \(X_k\subset A^N\) be the set of profiles \(a\in A^N\) with the following property: \(\big \{x\in A: \big |\{i\in N: a_i=x\}\big |=\aleph _0\big \}=A_k\). In words, \(X_k\) is the set of profiles under which the actions that appear infinitely many times are the elements of \(A_k\). By invariance to permutations and the fact that u is a tail function, it follows that given each \(x\in A\), the function u(x, .) is constant on each \(X_k\). The fact that \(A^N=\cup _k X_k\) completes the proof.\(\square \)
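The bound \(|A|\big [2^{|A|}-1\big ]\) can be enumerated explicitly for small action sets. The following sketch (an illustration of the counting step, not part of the proof) counts the pairs consisting of an own action and a nonempty set of infinitely-appearing actions:

```python
from itertools import combinations

def tail_range_bound(num_actions):
    """Number of pairs (own action, nonempty set of infinitely-appearing actions)."""
    A = list(range(num_actions))
    nonempty_subsets = [s for r in range(1, num_actions + 1)
                        for s in combinations(A, r)]
    return len(A) * len(nonempty_subsets)

assert tail_range_bound(2) == 2 * (2**2 - 1) == 6
assert tail_range_bound(3) == 3 * (2**3 - 1) == 21
```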

The above constraints on the utility’s range are due to the stringent demand that any permutation on \(a_{-i}\) leaves the utility intact. The obvious weakening of this requirement is to impose it only on finite permutations: permutations \(\pi \) such that \(\big |\big \{j\in N: \pi (j)\ne j\big \}\big |<\infty \). Say that a game for which this condition is satisfied, and for which all other requirements in the definition of a symmetric game are met, is a weakly symmetric game.

If a game is weakly symmetric and its common utility function is a tail function, then it is possible for this utility to have a range with the cardinality \(2^{\aleph _0}\). An example is the game where the player set is \(\mathbb {N}\), each player has the strategy set \(\{0,1\}\), and the utility function is \(u(a)=\text {limsup}_{n\rightarrow \infty }\frac{\big |\left\{ 1\le k \le n: a_k=1\right\} \big |}{n}\).
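To see that the range of this u is all of \([0,1]\), and hence of cardinality \(2^{\aleph _0}\), one can exhibit, for each \(\theta \in [0,1]\), a profile whose empirical frequency of 1’s converges to \(\theta \). A sketch (the particular profile construction below is mine, not the paper’s):

```python
import math

def empirical_freq(theta, n):
    """Frequency of 1's among the first n coordinates of the profile
    a_k = 1 iff floor((k+1)*theta) > floor(k*theta), which has density theta."""
    ones = sum(1 for k in range(1, n + 1)
               if math.floor((k + 1) * theta) > math.floor(k * theta))
    return ones / n

# the limsup utility of this profile equals theta
for theta in (0.0, 0.25, math.sqrt(2) / 2):
    assert abs(empirical_freq(theta, 10**5) - theta) < 1e-3
```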

Next is an example of a symmetric game (not just weakly symmetric) in which Nash equilibria exist, but all of them are asymmetric. Let \(G^{**}\) be the following game: the player and action sets are \(\mathbb {N}\) and \(\{0,1\}\), respectively, and the utility of player i from the pure profile a is:

$$\begin{aligned} u_i(a)=\left\{ \begin{array}{ll} -1 &{} \quad \text {if}\; a_j=1\hbox { for all }j\in \mathbb {N}\\ a_i &{} \quad \text {otherwise} \end{array}\right. \end{aligned}$$

Proposition 4

\(G^{**}\) has a Nash equilibrium.

Proof

Consider the profile where player 1 plays the action 0 and every other player plays the action 1. This is a Nash equilibrium.\(\square \)
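A finite truncation makes the payoff comparisons explicit (an illustration only: in \(G^{**}\) itself, the first case of \(u_i\) requires all countably many players to play 1):

```python
def payoff_G_star_star(i, profile):
    """u_i in a finite truncation of G**; `profile` maps players to actions."""
    if all(a == 1 for a in profile.values()):
        return -1
    return profile[i]

n = 5
profile = {1: 0, **{j: 1 for j in range(2, n + 1)}}   # the equilibrium profile

# player 1: deviating to 1 would make everyone play 1 and drop his payoff to -1
assert payoff_G_star_star(1, {**profile, 1: 1}) < payoff_G_star_star(1, profile)
# any player j > 1: deviating to 0 drops his payoff from 1 to 0
assert payoff_G_star_star(2, {**profile, 2: 0}) < payoff_G_star_star(2, profile)
```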

Despite having Nash equilibria and being a symmetric game, \(G^{**}\) does not have a symmetric equilibrium. This fact parallels the results of Fey (2012), who constructed symmetric games with infinitely many strategies and finitely many players, in which equilibrium exists, but all equilibria are asymmetric.

Proposition 5

\(G^{**}\) does not have a symmetric Nash equilibrium.

Proof

Assume by contradiction that there is a symmetric Nash equilibrium. Let q be the probability that a given player is playing the action 1 under this equilibrium. Clearly, \(q\notin \{0,1\}\) (if \(q=1\), deviating to 0 raises a player’s payoff from \(-1\) to 0; if \(q=0\), deviating to 1 raises it from 0 to 1). This means that each player is indifferent between the two actions. Consider an arbitrary player, i. Let p be the probability that \(a_j=1\) for all \(j\ne i\). The indifference condition for player i is \(p(-1)+(1-p)(1)=0\), and so \(p=0.5\). However, \(p=\text {lim}_{n\rightarrow \infty }q^n=0\), a contradiction.\(\square \)
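The two computations in this proof, the indifference condition and the vanishing product, can be checked directly (a sketch):

```python
# indifference between the two actions: p*(-1) + (1-p)*1 = 0 holds only at p = 1/2
p = 0.5
assert p * (-1) + (1 - p) * 1 == 0

# but independence forces p = lim_n q^n = 0 for any q in (0, 1)
q = 0.99
assert q ** 10000 < 1e-6
```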

It is easy to see that the utility function of \(G^{**}\) is not a tail function. For example, suppose that all players \(i\ge 3\) take the action 1. Then player 1’s payoff cannot be deduced from the knowledge of his own action: if he also plays the action 1, his payoff would be either \(-1\) or \(+1\), depending on player 2’s action. The following result shows that the fact that this utility is not a tail function is inevitable.

Proposition 6

Let G be a symmetric game the utility of which is a tail function, and suppose that G has some Nash equilibrium. Then G has a symmetric Nash equilibrium.

Proof

Let G be as above and consider some Nash equilibrium of it, e. Let A be the set of pure strategies in G and let \(X\subset A\) be those actions that are realized in the equilibrium e infinitely many times with probability one. Note that \(X\ne \emptyset \): since A is finite, in every pure profile some action appears infinitely many times, and for each \(x\in A\) the event “x appears infinitely many times” is a tail event, hence has probability zero or one. Let \(\alpha ^*\) be the uniform lottery on the elements of X. I argue that \(\tilde{\alpha }=(\tilde{\alpha }_i)_{i\in N}\), where \(\tilde{\alpha }_i=\alpha ^*\) for each player i, is a symmetric Nash equilibrium. To see this, consider the abovementioned equilibrium, e, and consider an arbitrary \(x\in X\). Let i be a player who assigns a strictly positive probability to x in the equilibrium e. Note that with probability 1 the following event occurs: the profile \((a_j)_{j\ne i}\) consists of infinitely many appearances of each \(x'\in X\). Since e is an equilibrium, \(x\in X\) is a best response for i. Under \(\tilde{\alpha }\), too, each \(x'\in X\) appears infinitely many times with probability one; since, by symmetry and the tail property, a player’s payoff depends only on his own action and on the set of actions that appear infinitely many times, each \(x\in X\) remains a best response under \(\tilde{\alpha }\). Hence \(\tilde{\alpha }\) is indeed a Nash equilibrium.\(\square \)

Similarly to the fact that the tail function assumption cannot be dispensed with in Proposition 6, the symmetry assumption is also crucial; specifically, it cannot be replaced by weak symmetry.

Proposition 7

There exists a weakly symmetric game the utility of which is a tail function, that has some Nash equilibrium but does not have any symmetric Nash equilibrium.

Proof

Consider the following game. The player set is \(N=\mathbb {Q}\cap (0,1)\) and each player has the pure strategies \(\{0,1\}\). To define the utility functions, some preliminary notation is needed. Let \(\mathcal {A}\) be the set of those profiles \(a\in \{0,1\}^N\) for which there is a cutoff \(\delta \in (0,1)\) such that all players in \(N\cap (0,\delta )\) play the same action and all \(i\in N\setminus (0,\delta )\) play the other action. Given \(a\in \mathcal {A}\), let \(\delta (a)\) be the cutoff associated with a.

Let \(\mathcal {A}^*\) consist of those profiles \(a\in \{0,1\}^N\) such that there is a \(b\in \mathcal {A}\) such that \(\big |\{i\in N: b_i\ne a_i\}\big |<\infty \). That is, a profile belongs to \(\mathcal {A}^*\) if and only if one can produce out of it an element of \(\mathcal {A}\) via finitely many individual-strategy changes. Since to each such a there corresponds a unique such b, I write, with a slight abuse of notation, \(\delta (a)\) to denote the associated cutoff [i.e., \(\delta (b)\)]. The utilities in our game can now be defined:

$$\begin{aligned} u_i(a)=\left\{ \begin{array}{ll} \delta (a) &{} \quad \text {if}\; a\in \mathcal {A}^*\\ a_i &{}\quad \text {if}\; a\notin \mathcal {A}^*\hbox { and }\big |\{j: a_j=0\}\big |>\big |\{j: a_j=1\}\big |\\ -a_i &{} \quad \text {if}\; a\notin \mathcal {A}^*\hbox { and }\big |\{j: a_j=0\}\big |\le \big |\{j: a_j=1\}\big | \end{array}\right. \end{aligned}$$

Call this game \(G^{***}\). Note that \(G^{***}\) is a weakly symmetric game the utility function of which is a tail function.

It is easy to see that \(G^{***}\) has a Nash equilibrium. To see this, pick \(\delta \in (0,1)\) and consider the following profile: all \(i\in N\cap (0,\delta )\) play the action 1 and all other players play 0. This is a Nash equilibrium: a unilateral deviation leaves the profile in \(\mathcal {A}^*\) with the same cutoff, and hence leaves all payoffs unchanged. However, \(G^{***}\) does not have a symmetric Nash equilibrium. To see this, assume by contradiction that \(\alpha \) is such an equilibrium. Let p denote the probability of \(\mathcal {A}^*\) (given \(\alpha \)). By the 0–1 Law, \(p\in \{0,1\}\). Let q be the probability that a player takes the action 1 under \(\alpha \). It is easy to see that \(q\notin \{0,1\}\). Therefore, each player is indifferent between his pure strategies. If \(p=0\) then, almost surely, both actions are played by infinitely many players, so the action 1 yields \(-1\) while the action 0 yields 0; indifference therefore forces \(p=1\). I argue, however, that this is impossible: p must be zero. To see this, note that:

$$\begin{aligned} p=Pr\big (\mathcal {A}^*\big )=\underbrace{Pr\Big (\big \{a\in \mathcal {A}^*:0<\delta (a)<\frac{1}{2}\big \}\Big )}_{A}+\underbrace{Pr\Big (\big \{a\in \mathcal {A}^*:\frac{1}{2}<\delta (a)<1\big \}\Big )}_{B}. \end{aligned}$$

The events underlying A and B are tail events, so, by the 0–1 Law, both A and B belong to \(\{0,1\}\). Moreover, the map \(i\mapsto 1-i\) is a bijection of N that sends each profile in \(\mathcal {A}^*\) with cutoff \(\delta \) to a profile in \(\mathcal {A}^*\) with cutoff \(1-\delta \); since all players use the same mixed strategy under \(\alpha \), this implies \(A=B\). Since \(A+B=p\le 1\), it follows that \(A=B=p=0\). \(\square \)
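The final arithmetic step can be enumerated (an illustration): the constraints \(A=B\), \(A,B\in \{0,1\}\), and \(A+B=p\le 1\) leave a single possibility.

```python
# enumerate all (A, B) with A, B in {0, 1}, A = B, and A + B a valid probability
solutions = [(A, B) for A in (0, 1) for B in (0, 1)
             if A == B and A + B <= 1]
assert solutions == [(0, 0)]   # hence A = B = 0 and p = A + B = 0
```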