
Adjustment Dynamics in Network Games with Stochastic Parameters

  • Conference paper
Frontiers of Dynamic Games

Part of the book series: Static & Dynamic Game Theory: Foundations & Applications (SDGTFA)


Abstract

In this paper we introduce stochastic parameters into the network game model with production and knowledge externalities proposed by V. Matveenko and A. Korolev as a generalization of the two-period Romer model. Agents differ in their productivities, which have deterministic and stochastic (Wiener) components. We study the dynamics of a single agent and the dynamics of a dyad, in which two agents are aggregated. We derive explicit expressions for both dynamics in the form of Brownian random processes and qualitatively analyze the solutions of the resulting stochastic equations and systems of stochastic equations.


References

  1. Borodin, A.N., Salminen, P.: Handbook of Brownian Motion: Facts and Formulae. Birkhäuser, Basel (1996)
  2. Bramoullé, Y., Kranton, R.: Public goods in networks. J. Econ. Theory 135, 478–494 (2007)
  3. Galeotti, A., Goyal, S., Jackson, M.O., Vega-Redondo, F., Yariv, L.: Network games. Rev. Econ. Stud. 77, 218–244 (2010)
  4. Garmash, M.V., Kaneva, X.A.: Game equilibria and adjustment dynamics in full networks and in triangle with heterogeneous agents. Math. Game Theory Appl. 10(2), 3–26 (2018, in Russian); English translation to appear in Autom. Remote Control (2020)
  5. Granovetter, M.S.: The strength of weak ties. Am. J. Sociol. 78, 1360–1380 (1973)
  6. Jackson, M.O.: Social and Economic Networks. Princeton University Press, Princeton (2008)
  7. Jackson, M.O., Zenou, Y.: Games on networks. In: Young, P., Zamir, S. (eds.) Handbook of Game Theory, vol. 4, pp. 95–163. Elsevier, Amsterdam (2014)
  8. Kiselev, A.O., Yurchenko, N.I.: Game equilibria and transient dynamics in dyad with heterogeneous agents. Math. Game Theory Appl. 10(1), 40–64 (2018, in Russian)
  9. Lamperti, J.: Stochastic Processes. Springer, New York (1977)
  10. Martemyanov, Y.P., Matveenko, V.D.: On the dependence of the growth rate on the elasticity of substitution in a network. Int. J. Process Manag. Benchmarking 4(4), 475–492 (2014)
  11. Matveenko, V.D., Korolev, A.V.: Network game with production and knowledge externalities. Contrib. Game Theory Manag. 8, 199–222 (2015)
  12. Matveenko, V., Korolev, A., Zhdanova, M.: Game equilibria and unification dynamics in networks with heterogeneous agents. Int. J. Eng. Bus. Manag. 9, 1–17 (2017)
  13. Matveenko, V., Garmash, M., Korolev, A.: Game equilibria and transition dynamics in networks with heterogeneous agents. In: Petrosyan, L.A., Mazalov, V.V., Zenkevich, N.A. (eds.) Frontiers of Dynamic Games, pp. 165–188. Birkhäuser, Cham (2018)
  14. Romer, P.M.: Increasing returns and long-run growth. J. Polit. Econ. 94, 1002–1037 (1986)



Appendix

Proof of Proposition 6.4.

Proof

The system of differential equations in the deterministic case has the form

$$\displaystyle \begin{aligned} \left\{ \begin{array}{c} {\dot{k}_1=\left(\frac{A_1}{2a}-1\right)k_1+\frac{A_1}{2a}k_2-\frac{\varepsilon(1-2a)}{2a},} \\ {\dot{k}_2=\frac{A_2}{2a}k_1+\left(\frac{A_2}{2a}-1\right)k_2-\frac{\varepsilon(1-2a)}{2a}.} \end{array} \right. \end{aligned} $$
(6.24)

The characteristic equation for system (6.11) is as follows

$$\displaystyle \begin{aligned} (\lambda+1)^2-\frac{A_1+A_2}{2a}(\lambda+1)=0, \end{aligned}$$

therefore eigenvalues are

$$\displaystyle \begin{aligned} \lambda_1=-1;\quad \lambda_2=-1+\frac{\bar{A}}{a}, \end{aligned}$$

where \(\bar{A}=\frac{A_1+A_2}{2}\). Obviously, as eigenvectors of the matrix A we can choose the vectors

$$\displaystyle \begin{aligned} e_1= \left( \begin{array}{c} 1 \\ -1 \end{array} \right),\quad e_2= \left( \begin{array}{c} A_1 \\ A_2 \end{array} \right). \end{aligned}$$

So the transition matrix is

$$\displaystyle \begin{aligned} S= \left( \begin{array}{rr} 1 &\quad A_1 \\ -1 &\quad A_2 \end{array} \right), \end{aligned}$$

then

$$\displaystyle \begin{aligned} AS=SJ,\quad e^{tA}=Se^{tJ}S^{-1}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} J= \left( \begin{array}{cc} -1 & 0 \\ 0 & \quad -1+\frac{\bar{A}}{a} \end{array} \right),\quad e^{tJ}= \left( \begin{array}{cc} e^{-t} & 0 \\ 0 & \quad \exp\left(\left(\frac{\bar{A}}{a}-1\right)t\right) \end{array} \right), \end{aligned}$$
$$\displaystyle \begin{aligned} S^{-1}=\frac{1}{A_1+A_2} \left( \begin{array}{cc} A_2 & -A_1 \\ 1 & 1 \end{array} \right). \end{aligned}$$
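As a quick numerical sanity check of this diagonalization, the relations \(AS=SJ\) and the explicit formula for \(S^{-1}\) can be verified directly; the parameter values for \(A_1\), \(A_2\), \(a\) below are assumptions made for this sketch, not taken from the paper.

```python
import numpy as np

# Illustrative parameter values (assumed for this check, not from the paper)
A1, A2, a = 2.0, 3.0, 0.25
Abar = (A1 + A2) / 2

# Coefficient matrix of the deterministic system (6.24)
A = np.array([[A1/(2*a) - 1, A1/(2*a)],
              [A2/(2*a),     A2/(2*a) - 1]])

# Transition matrix from the eigenvectors e1 = (1, -1), e2 = (A1, A2)
S = np.array([[1.0, A1],
              [-1.0, A2]])
# Diagonal form with the eigenvalues -1 and Abar/a - 1
J = np.diag([-1.0, Abar/a - 1.0])

# A S = S J, hence e^{tA} = S e^{tJ} S^{-1}
assert np.allclose(A @ S, S @ J)

# The explicit inverse given above
Sinv = np.array([[A2, -A1], [1.0, 1.0]]) / (A1 + A2)
assert np.allclose(Sinv @ S, np.eye(2))
```

The assertions pass for any parameter values with \(A_1+A_2\neq 0\), which is what makes \(S\) invertible.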

The general solution of system (6.11) is as follows

$$\displaystyle \begin{aligned} \left( \begin{array}{c} k_1 \\ k_2 \end{array} \right)=C_1 \left( \begin{array}{c} 1 \\ -1 \end{array} \right)\exp(-t)+C_2 \left( \begin{array}{c} A_1 \\ A_2 \end{array} \right)\exp\left(\left(\frac{\bar{A}}{a}-1\right)t\right)+ \left( \begin{array}{c} D_1 \\ D_2 \end{array} \right). \end{aligned}$$

We find the constants \(D_1\) and \(D_2\) by solving the system of equations

$$\displaystyle \begin{aligned} \left\{ \begin{array}{c} {\left(\frac{A_1}{2a}-1\right)k_1+\frac{A_1}{2a}k_2=\frac{\varepsilon(1-2a)}{2a},} \\ {\frac{A_2}{2a}k_1+\left(\frac{A_2}{2a}-1\right)k_2=\frac{\varepsilon(1-2a)}{2a}.} \end{array} \right. \end{aligned}$$

It is easy to verify that they are determined by expression (6.18). We find the integration constants \(C_1\) and \(C_2\) from the initial conditions:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} {k_1^0=C_1+A_1 C_2+D_1,} \\ {k_2^0=-C_1+A_2 C_2+D_2,} \end{array} \right. \end{aligned}$$

so

$$\displaystyle \begin{aligned} C_2=\frac{\bar{k}^0-\bar{D}}{\bar{A}}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \bar{k}^0=\frac{k_1^0+k_2^0}{2},\quad \bar{D}=\frac{D_1+D_2}{2}=\frac{\varepsilon(1-2a)}{2(\bar{A}-a)},\quad A_2D_1-A_1D_2=\frac{\varepsilon(1-2a)(A_1-A_2)}{2a}. \end{aligned}$$

Then

$$\displaystyle \begin{aligned} C_1=\frac{A_2k_1^0-A_1k_2^0}{2\bar{A}}-\frac{\varepsilon(1-2a)(A_1-A_2)}{4a\bar{A}}. \end{aligned}$$

Thus, the solution is determined by expression (6.17). ■
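To make the closed-form solution concrete, the sketch below builds \(k(t)\) from the eigen-expansion and the constants derived above, then checks numerically that it meets the initial condition and satisfies the system (6.24). The parameter values and the initial point are assumptions for illustration only; the integration constants are found by solving the same linear system as in the proof.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the paper)
A1, A2, a, eps = 2.0, 3.0, 0.25, 0.4
Abar = (A1 + A2) / 2
c = eps * (1 - 2*a) / (2*a)

# k' = A k + b  -- the deterministic system (6.24)
A = np.array([[A1/(2*a) - 1, A1/(2*a)],
              [A2/(2*a),     A2/(2*a) - 1]])
b = -c * np.ones(2)

D = np.linalg.solve(A, -b)           # stationary point: A D + b = 0
k0 = np.array([1.0, 0.5])

# Integration constants from the initial conditions: S (C1, C2)^T = k0 - D
S = np.array([[1.0, A1],
              [-1.0, A2]])
C1, C2 = np.linalg.solve(S, k0 - D)

def k(t):
    """General solution: combination of the modes e^{-t} and e^{(Abar/a - 1)t}."""
    return (C1 * np.array([1.0, -1.0]) * np.exp(-t)
            + C2 * np.array([A1, A2]) * np.exp((Abar/a - 1.0) * t)
            + D)

# The formula meets the initial condition ...
assert np.allclose(k(0.0), k0)
# ... and satisfies k' = A k + b (central finite-difference check)
h = 1e-6
for t in (0.3, 1.0):
    dk = (k(t + h) - k(t - h)) / (2 * h)
    assert np.allclose(dk, A @ k(t) + b, rtol=1e-5, atol=1e-5)
```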

Proof of Theorem 6.3.

Proof

It is clear that the matrices A and α commute; therefore, for the matrix exponentials, the relation

$$\displaystyle \begin{aligned} e^{At}e^{\alpha W_t}=e^{At+\alpha W_t} \end{aligned}$$

holds, and we can solve the matrix equation (6.16) by multiplying from the left by the matrix exponential

$$\displaystyle \begin{aligned} e^{-At-\alpha W_t+\frac{\alpha ^2}{2}t}. \end{aligned}$$

As in the one-dimensional case, denote for brevity

$$\displaystyle \begin{aligned} \varPsi=-At-\alpha W_t+\frac{\alpha^2}{2}t. \end{aligned}$$

Then we have

$$\displaystyle \begin{aligned} d\left(e^\varPsi k\right)=e^\varPsi dk+\left(de^\varPsi\right)k+de^\varPsi\,dk= \end{aligned}$$
$$\displaystyle \begin{aligned} =e^\varPsi\left(Akdt+\alpha kdW_t+\bar{E}dt\right) +e^\varPsi\left(-Adt-\alpha dW_t+\frac{\alpha^2}{2}dt +\frac{\alpha^2}{2}dt\right)k-e^\varPsi\alpha^2 k\,dt= \end{aligned}$$
$$\displaystyle \begin{aligned} =e^\varPsi\bar{E}dt. \end{aligned}$$

Thus, Eq. (6.16) takes the form

$$\displaystyle \begin{aligned} d\left(e^{-At-\alpha W_t+\frac{\alpha^2}{2}t}k\right)= e^{-At-\alpha W_t+\frac{\alpha^2}{2}t}\bar{E}dt, \end{aligned}$$

therefore, the solution of matrix equation (6.8) can be written as

$$\displaystyle \begin{aligned} k(t)=e^{At+\alpha W_t-\frac{\alpha^2}{2}t}k_0 +e^{At+\alpha W_t-\frac{\alpha^2}{2}t} \left(\int_0^t e^{-A\tau-\alpha W_\tau+\frac{\alpha^2}{2}\tau}d\tau\right)\bar{E}. \end{aligned} $$
(6.25)

Notice that

$$\displaystyle \begin{aligned} \alpha^2=\frac{1}{2a} \left( \begin{array}{cc} \alpha_1 & \alpha_1 \\ \alpha_2 & \alpha_2 \end{array} \right)\cdot\frac{1}{2a} \left( \begin{array}{cc} \alpha_1 & \alpha_1 \\ \alpha_2 & \alpha_2 \end{array} \right)=\frac{1}{4a^2} \left( \begin{array}{cc} \alpha_1^2+\alpha_1\alpha_2 & \quad \alpha_1^2+\alpha_1\alpha_2 \\ \alpha_1\alpha_2+\alpha_2^2 & \quad \alpha_1\alpha_2+\alpha_2^2 \end{array} \right). \end{aligned}$$

The eigenvalues of the matrix

$$\displaystyle \begin{aligned} A-\frac{\alpha^2}{2}= \left( \begin{array}{cc} \frac{A_1}{2a}-\frac{\alpha_1^2+\alpha_1\alpha_2}{8a^2}-1 & \quad \frac{A_1}{2a}-\frac{\alpha_1^2+\alpha_1\alpha_2}{8a^2}\\ \frac{A_2}{2a}-\frac{\alpha_1\alpha_2+\alpha_2^2}{8a^2} & \quad \frac{A_2}{2a}-\frac{\alpha_1\alpha_2+\alpha_2^2}{8a^2}-1 \end{array} \right) \end{aligned}$$

are obviously \(\lambda_1=-1\) and \(\lambda_2=\frac{\bar{A}}{a}-\frac{(\alpha_1+\alpha_2)^2}{8a^2}-1\). As eigenvectors we can take

$$\displaystyle \begin{aligned} e_1= \left( \begin{array}{c} 1 \\ -1 \end{array} \right) \end{aligned}$$

and

$$\displaystyle \begin{aligned} e_2= \left( \begin{array}{c} A_1 \\ A_2 \end{array} \right) \end{aligned}$$

or in view of (6.14)

$$\displaystyle \begin{aligned} e_2= \left( \begin{array}{c} \alpha_1 \\ \alpha_2 \end{array} \right). \end{aligned}$$

The eigenvalues of the matrix α are \(\lambda_1=0\) and \(\lambda_2=\frac{\alpha_1+\alpha_2}{2a}\), and obviously we can choose the same eigenvectors \(e_1\) and \(e_2\) as for the matrix \(A-\frac{\alpha^2}{2}\). Therefore, to reduce the matrices \(\left(A-\frac{\alpha^2}{2}\right)t\) and \(\alpha W_t\) to diagonal form we can use the same transition matrices

$$\displaystyle \begin{aligned} S= \left( \begin{array}{rr} 1 & \quad A_1 \\ -1 & A_2 \end{array} \right),\quad S^{-1}=\frac{1}{A_1+A_2} \left( \begin{array}{cc} A_2 & -A_1 \\ 1 & 1 \end{array} \right), \end{aligned}$$

so we get

$$\displaystyle \begin{aligned} \left(A-\frac{\alpha^2}{2}\right)t+\alpha W_t=S(Jt+\varLambda W_t)S^{-1}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} J= \left( \begin{array}{cc} -1 & 0 \\ \quad 0 & \quad \frac{\bar{A}}{a}-\frac{(\alpha_1+\alpha_2)^2}{8a^2}-1 \end{array} \right),\quad \varLambda= \left( \begin{array}{cc} 0 & 0 \\ 0 & \frac{\alpha_1+\alpha_2}{2a} \end{array} \right), \end{aligned}$$

and correspondingly

$$\displaystyle \begin{aligned} \exp\left(\left(A-\frac{\alpha^2}{2}\right)t+\alpha W_t\right)= \end{aligned}$$
$$\displaystyle \begin{aligned} =S \left( \begin{array}{cc} \exp(-t) & 0 \\ 0 & \quad \exp\left(\left(\frac{\bar{A}}{a}-\frac{(\alpha_1+\alpha_2)^2}{8a^2}-1\right)t +\frac{\alpha_1+\alpha_2}{2a}W_t\right) \end{array} \right) S^{-1}. \end{aligned} $$
(6.26)

Substituting (6.26) into (6.25) we obtain

$$\displaystyle \begin{aligned} \left( \begin{array}{c} k_1(t) \\ k_2(t) \end{array} \right)=\frac{1}{A_1+A_2} \left( \begin{array}{cc} 1 & \quad A_1 \\ -1 & \quad A_2 \end{array} \right)\times \end{aligned}$$
$$\displaystyle \begin{aligned} \times \left( \begin{array}{cc} \exp(-t) & 0 \\ 0 & \quad \exp\left(\left(\frac{\bar{A}}{a}- \frac{(\alpha_1+\alpha_2)^2}{8a^2}-1\right)t +\frac{\alpha_1+\alpha_2}{2a}W_t\right) \end{array} \right) \left( \begin{array}{cc} A_2 & -A_1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{c} k_1^0 \\ k_2^0 \end{array} \right)- \end{aligned}$$
$$\displaystyle \begin{aligned} -\frac{\varepsilon(1-2a)}{2a}\cdot\frac{1}{A_1+A_2} \left( \begin{array}{cc} 1 & \quad A_1 \\ -1 & \quad A_2 \end{array} \right)\times \end{aligned}$$
$$\displaystyle \begin{aligned} \times \left( \begin{array}{cc} \exp(-t) & 0 \\ 0 & \quad \exp\left(\left(\frac{\bar{A}}{a}- \frac{(\alpha_1+\alpha_2)^2}{8a^2}-1\right)t +\frac{\alpha_1+\alpha_2}{2a}W_t\right) \end{array} \right)\times \end{aligned}$$
$$\displaystyle \begin{aligned} \times \left( \begin{array}{cc} \int_0^t\exp(\tau)d\tau & 0 \\ 0 & \quad \int_0^t \exp\left(\left(-\frac{\bar{A}}{a} +\frac{(\alpha_1+\alpha_2)^2}{8a^2}+1\right)\tau -\frac{\alpha_1+\alpha_2}{2a}W_\tau\right)d\tau \end{array} \right)\times \end{aligned}$$
$$\displaystyle \begin{aligned} \times \left( \begin{array}{cc} A_2 & -A_1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \end{array} \right). \end{aligned} $$
(6.27)

Evaluating expression (6.27), we obtain expressions (6.19)–(6.20). ■
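The closed-form solution (6.25)–(6.26) can be cross-checked against a direct Euler–Maruyama discretization of the dyad system along the same Brownian path. The sketch below is illustrative only: the parameter values are assumptions, the noise coefficients are taken proportional to the productivities (so that \(A\) and \(\alpha\) commute, as required in the proof), and a single Wiener process drives both agents, as in the solution formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed; alpha_i proportional to A_i so A and alpha commute)
A1, A2, a, eps, sigma = 0.2, 0.3, 0.25, 0.4, 0.4
al1, al2 = sigma * A1, sigma * A2
Abar = (A1 + A2) / 2
c = eps * (1 - 2*a) / (2*a)

A = np.array([[A1/(2*a) - 1, A1/(2*a)],
              [A2/(2*a),     A2/(2*a) - 1]])
alpha = np.array([[al1, al1],
                  [al2, al2]]) / (2*a)
E = -c * np.ones(2)                       # constant term of the drift
k0 = np.array([1.0, 0.5])

# Common eigenbasis of A - alpha^2/2 and alpha, as in the proof
S = np.array([[1.0, A1], [-1.0, A2]])
Sinv = np.linalg.inv(S)
lam2 = Abar/a - (al1 + al2)**2 / (8 * a**2) - 1.0
mu = (al1 + al2) / (2*a)

def Phi(t, W):
    """exp((A - alpha^2/2) t + alpha W) via the diagonal form (6.26)."""
    return S @ np.diag([np.exp(-t), np.exp(lam2*t + mu*W)]) @ Sinv

# One Brownian path on a fine grid
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
ts = np.linspace(0.0, T, n + 1)

# Euler-Maruyama for dk = (A k + E) dt + alpha k dW
k = k0.copy()
for i in range(n):
    k = k + (A @ k + E) * dt + (alpha @ k) * dW[i]

# Closed form (6.25): k(T) = Phi(T) [k0 + int_0^T Phi(tau)^{-1} E dtau],
# with the integral approximated by the trapezoid rule along the same path
SinvE = Sinv @ E
integrand = S @ np.vstack([np.exp(ts) * SinvE[0],
                           np.exp(-lam2*ts - mu*W) * SinvE[1]])
integral = ((integrand[:, :-1] + integrand[:, 1:]) * dt / 2).sum(axis=1)
k_closed = Phi(T, W[-1]) @ (k0 + integral)

# The two answers agree along this path up to the discretization error
assert np.linalg.norm(k - k_closed) < 0.05
```

The agreement is pathwise, not just in distribution, because both computations are driven by the same realization of \(W_t\); the tolerance reflects the strong order 1/2 of the Euler–Maruyama scheme.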


Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Korolev, A. (2020). Adjustment Dynamics in Network Games with Stochastic Parameters. In: Petrosyan, L.A., Mazalov, V.V., Zenkevich, N.A. (eds) Frontiers of Dynamic Games. Static & Dynamic Game Theory: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-51941-4_6
