
A switching microstructure model for stock prices

Published in: Mathematics and Financial Economics

Abstract

This article proposes a microstructure model for stock prices in which parameters are modulated by a Markov chain determining the market behaviour. In this approach, called the switching microstructure model (SMM), the stock price is the result of the balance between the supply and the demand for shares. The arrivals of bid and ask orders are represented by two mutually- and self-excited processes. The intensities of these processes converge to a mean reversion level that depends upon the regime of the Markov chain. The first part of this work studies the mathematical properties of the SMM. The second part focuses on the econometric estimation of parameters. For this purpose, we combine a particle filter with a Markov chain Monte Carlo algorithm. Finally, we calibrate the SMM with two and three regimes to daily returns of the S&P 500 and compare them with a non-switching model.

Notes

  1. The chosen parameters are in the same range of values as the real estimates reported in Sect. 4.2. In order to clearly visualize changes of regime, the gap between mean reversion levels in each regime is increased. For the same reason, we have also modified the transition probabilities so as to observe a sufficient number of regime changes during the simulation.

References

  1. Ait-Sahalia, Y., Cacho-Diaz, J., Laeven, R.J.A.: Modeling financial contagion using mutually exciting jump processes. J. Financ. Econ. 117(3), 586–606 (2015)

  2. Al-Anaswah, N., Wilfing, B.: Identification of speculative bubbles using state-space models with Markov-switching. J. Bank. Finance 35(5), 1073–1086 (2011)

  3. Bacry, E., Delattre, S., Hoffmann, M., Muzy, J.F.: Modelling microstructure noise with mutually exciting point processes. Quant. Finance 13(1), 65–77 (2013)

  4. Bacry, E., Delattre, S., Hoffmann, M., Muzy, J.F.: Scaling limits for Hawkes processes and application to financial statistics. Stoch. Process. Appl. 123(7), 2475–2499 (2013)

  5. Bacry, E., Muzy, J.F.: Hawkes model for price and trades high-frequency dynamics. Quant. Finance 14(7), 1147–1166 (2014)

  6. Bacry, E., Mastromatteo, I., Muzy, J.F.: Hawkes processes in finance. Mark. Microstruct. Liq. 1(1), 1–59 (2015)

  7. Bacry, E., Muzy, J.F.: Second order statistics characterization of Hawkes processes and non-parametric estimation. IEEE Trans. Inf. Theory 62(4), 2184–2202 (2016)

  8. Bormetti, G., Calcagnile, L.M., Treccani, M., Corsi, F., Marmi, S., Lillo, F.: Modelling systemic price cojumps with Hawkes factor models. Quant. Finance 15(7), 1137–1156 (2015)

  9. Bouchaud, J.P.: Price impact. In: Cont, R. (ed.) Encyclopedia of Quantitative Finance. Wiley, Hoboken (2010)

  10. Bouchaud, J.P., Farmer, J.D., Lillo, F.: How markets slowly digest changes in supply and demand. In: Hens, T., Schenk-Hoppé, K.R. (eds.) Handbook of Financial Markets. Elsevier, New York (2009)

  11. Bowsher, C.G.: Modelling security markets in continuous time: intensity based, multivariate point process models. Economics Discussion Paper No. 2002- W22, Nuffield College, Oxford (2002)

  12. Branger, N., Kraft, H., Meinerding, C.: Partial information about contagion risk, self-exciting processes and portfolio optimization. J. Econ. Dyn. Control 39, 18–36 (2014)

  13. Chavez-Demoulin, V., McGill, J.A.: High-frequency financial data modeling using Hawkes processes. J. Bank. Finance 36, 3415–3426 (2012)

  14. Cont, R., Kukanov, A., Stoikov, S.: The price impact of order book events. J. Financ. Econ. 12(1), 47–88 (2013)

  15. Da Fonseca, J., Zaatour, R.: Hawkes process: fast calibration, application to trade clustering, and diffusive limit. J. Futures Mark. 34(6), 548–579 (2014)

  16. Doucet, A., Godsill, S., Andrieu, C.: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 10, 197–208 (2000)

  17. Errais, E., Giesecke, K., Goldberg, L.: Affine point processes and portfolio credit risk. SIAM J. Financ. Math. 1, 642–665 (2010)

  18. Filimonov, V., Sornette, D.: Apparent criticality and calibration issues in the Hawkes self-excited point process model: application to high-frequency financial data. Quant. Finance 15(8), 1293–1314 (2015)

  19. Gatumel, M., Ielpo, F.: The number of regimes across asset returns: identification and economic value. Int. J. Theor. Appl. Finance 17(06), 25 (2014)

  20. Guidolin, M., Timmermann, A.: Economic implications of bull and bear regimes in UK stock and bond returns. Econ. J. 115, 11–143 (2005)

  21. Guidolin, M., Timmermann, A.: International asset allocation under regime switching, skew, and kurtosis preferences. Rev. Financ. Stud. 21(2), 889–935 (2008)

  22. Hainaut, D.: A model for interest rates with clustering effects. Quant. Finance 16(8), 1203–1218 (2016)

  23. Hainaut, D.: A bivariate Hawkes process for interest rate modeling. Econ. Model. 57, 180–196 (2016)

  24. Hainaut, D.: Clustered Lévy processes and their financial applications. J. Comput. Appl. Math. 319, 117–140 (2017)

  25. Hainaut, D., MacGilchrist, R.: Strategic asset allocation with switching dependence. Ann. Finance 8(1), 75–96 (2012)

  26. Hardiman, S.J., Bouchaud, J.P.: Branching ratio approximation for the self-exciting Hawkes process. Phys. Rev. E 90(6), 628071–628076 (2014)

  27. Hautsch, N.: Modelling Irregularly Spaced Financial Data. Springer, Berlin (2004)

  28. Hawkes, A.: Point spectra of some mutually exciting point processes. J. R. Stat. Soc. Ser. B 33, 438–443 (1971)

  29. Hawkes, A.: Spectra of some self-exciting and mutually exciting point processes. Biometrika 58, 83–90 (1971)

  30. Hawkes, A., Oakes, D.: A cluster representation of a self-exciting process. J. Appl. Probab. 11, 493–503 (1974)

  31. Horst, U., Paulsen, M.: A law of large numbers for limit order books. Math. Oper. Res. (2017). https://doi.org/10.1287/moor.2017.0848

  32. Jaisson, T., Rosenbaum, M.: Limit theorems for nearly unstable Hawkes processes. Ann. Appl. Probab. 25(2), 600–631 (2015)

  33. Kelly, F., Yudovina, E.: A Markov model of a limit order book: thresholds, recurrence, and trading strategies. Math. Oper. Res. (2017). https://doi.org/10.1287/moor.2017.0857

  34. Kyle, A.S.: Continuous auctions and insider trading. Econometrica 53, 1315–1335 (1985)

  35. Large, J.: Measuring the resiliency of an electronic limit order book. Working Paper, All Souls College, University of Oxford (2005)

  36. Lee, K., Seo, B.K.: Modeling microstructure price dynamics with symmetric Hawkes and diffusion model using ultra-high-frequency stock data. J. Econ. Dyn. Control 79, 154–183 (2017)

  37. Protter, P.E.: Stochastic Integration and Differential Equations. Springer, Berlin (2004)

  38. Wang, T., Bebbington, M., Harte, D.: Markov-modulated Hawkes process with stepwise decay. Ann. Inst. Stat. Math. 64, 521–544 (2012)

Acknowledgements

We thank the Chair “Data Analytics and Models for Insurance” of BNP Paribas Cardif, hosted by ISFA (Université Claude Bernard, Lyon, France), for its support. We also thank the two anonymous referees and the editor, Ulrich Horst, for their recommendations.

Corresponding author

Correspondence to Donatien Hainaut.

Appendix

Proof of Lemma 2.1

To prove this relation, we differentiate the expression of \(\lambda _{t}^{i}\) to retrieve its dynamics:

$$\begin{aligned} d\lambda _{t}^{i}= & {} \kappa _{i}c_{i,t}\,dt-\kappa _{i}\left( \lambda _{0}^{i}-\kappa _{i}\int _{0}^{t}e^{\kappa _{i}(s-t)}\left( \lambda _{0}^{i}-c_{i,s}\right) ds\right. \\&+\left. \int _{0}^{t}\delta _{i,1}e^{\kappa _{i}(s-t)}dL_{s}^{1}+\int _{0}^{t}\delta _{i,2}e^{\kappa _{i}(s-t)}dL_{s}^{2}\right) dt+\delta _{i,1}dL_{t}^{1}+\delta _{i,2}dL_{t}^{2}\\= & {} \kappa _{i}(c_{i,t}-\lambda _{t}^{i})dt+\delta _{i,1}dL_{t}^{1}+\delta _{i,2}dL_{t}^{2}\qquad i=1,2. \end{aligned}$$

To prove the positivity, we first recall that \(\int _{0}^{t}\delta _{i,1}e^{\kappa _{i}(s-t)}dL_{s}^{1}\) and \(\int _{0}^{t}\delta _{i,2}e^{\kappa _{i}(s-t)}dL_{s}^{2}\) are positive by construction. According to Eq. (12), the process \(\lambda _{t}^{i}\) admits the following lower bound:

$$\begin{aligned} \lambda _{t}^{i}> & {} \lambda _{0}^{i}+\left( \min \left( c_{i}\right) -\lambda _{0}^{i}\right) \kappa _{i}\int _{0}^{t}e^{\kappa _{i}(s-t)}ds. \end{aligned}$$
(46)

Given that \(\kappa _{i}\int _{0}^{t}e^{\kappa _{i}(s-t)}ds=\left( 1-e^{-\kappa _{i}t}\right) >0\), we conclude that

$$\begin{aligned} \lambda _{t}^{i}> & {} \lambda _{0}^{i}e^{-\kappa _{i}t}+\min \left( c_{i}\right) \left( 1-e^{-\kappa _{i}t}\right) >0. \end{aligned}$$

\(\square \)
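The dynamics and the lower bound above are easy to check numerically with a basic Euler scheme. The following sketch is purely illustrative: the parameter values, the exponential order sizes and the single constant regime are assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not the paper's estimates); single constant regime
kappa = np.array([5.0, 4.0])          # mean-reversion speeds kappa_i
c = np.array([1.0, 1.5])              # mean-reversion levels c_i
delta = np.array([[0.3, 0.2],         # delta[i, j]: impact of order flow j on lambda^i
                  [0.2, 0.4]])
mu = np.array([0.5, 0.5])             # mean order sizes, E(O^j) = mu_j
lam0 = np.array([1.0, 1.5])           # initial intensities lambda_0^i

dt, n_steps = 1e-3, 10_000
lam = lam0.copy()
path = np.empty((n_steps, 2))
for k in range(n_steps):
    jumps = rng.random(2) < lam * dt            # order arrivals dN^j on [t, t + dt)
    dL = jumps * rng.exponential(mu)            # order flows dL^j, exponential sizes
    lam = lam + kappa * (c - lam) * dt + delta @ dL
    path[k] = lam

# Lower bound of Lemma 2.1: lambda_0^i e^{-kappa_i t} + min(c)(1 - e^{-kappa_i t})
t = dt * np.arange(1, n_steps + 1)
bound = lam0 * np.exp(-kappa * t[:, None]) + c.min() * (1 - np.exp(-kappa * t[:, None]))
```

Since the jump terms only push the intensities upwards, the simulated paths should dominate the deterministic lower bound at every step.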

Proof of Proposition 3.1

As \(\mathcal {F}_{s}\subset \mathcal {F}_{s}\vee \mathcal {G}_{t}\), using nested expectations leads to the following expression for the expected intensity:

$$\begin{aligned} \mathbb {E}(\lambda _{t}^{i}|\mathcal {F}_{s})= & {} \mathbb {E}\left( \mathbb {E}\left( \lambda _{t}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) |\mathcal {F}_{s}\right) . \end{aligned}$$

Recalling the expression (13) of the intensity, Fubini's theorem leads to the following expression for the expectation of \(\lambda _{t}^{i}\), conditional on the augmented filtration \(\mathcal {F}_{s}\vee \mathcal {G}_{t}\):

$$\begin{aligned} \mathbb {E}\left( \lambda _{t}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right)= & {} \lambda _{s}^{i}-\kappa _{i}\int _{s}^{t}e^{\kappa _{i}(u-t)}\left( \lambda _{s}^{i}-c_{i,u}\right) du\nonumber \\&+\int _{s}^{t}\delta _{i,1}e^{\kappa _{i}(u-t)}\mathbb {E}\left( dL_{u}^{1}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) +\int _{s}^{t}\delta _{i,2}e^{\kappa _{i}(u-t)}\mathbb {E}\left( dL_{u}^{2}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) .\nonumber \\ \end{aligned}$$
(47)

Using the same approach as in Errais et al. [17], \(dL_{u}^{i}\) is rewritten as follows:

$$\begin{aligned} dL_{u}^{i}&={\left\{ \begin{array}{ll} O^{i} &{} \quad \text {if }dN_{u}^{i}=1\\ 0 &{} \quad \text {otherwise} \end{array}\right. }. \end{aligned}$$

The order size \(O^{i}\) being independent of all other processes, and hence of \(\mathcal {F}_{s}\vee \mathcal {G}_{t}\), we infer that

$$\begin{aligned} \mathbb {E}\left( dL_{u}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right)&=\mathbb {E}\left( O^{i}\right) \times \mathbb {E}\left( dN_{u}^{i}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \\&=\mu _{i}\times \mathbb {E}\left( dN_{u}^{i}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{u}\right) . \end{aligned}$$

Using nested expectations and conditioning with respect to the sample path of \(\lambda _{u}^{i}\), contained in the subfiltration \(\mathcal {H}_{u}\) of \(\mathcal {F}_{u}\), leads to the following equality for \(u\le t\):

$$\begin{aligned} \mathbb {E}\left( dL_{u}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right)&=\mu _{i}\times \mathbb {E}\left( \mathbb {E}\left( dN_{u}^{i}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{u}\vee \mathcal {H}_{u}\right) \,|\,\mathcal {F}_{s}\vee \mathcal {G}_{u}\right) . \end{aligned}$$

Conditional on the sample path of \(\lambda _{u}^{i}\), \(N_{u}^{i}\) is a non-homogeneous Poisson process, so that

$$\begin{aligned} \mathbb {E}\left( dN_{u}^{i}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{u}\vee \mathcal {H}_{u}\right) =\lambda _{u-}^{i}\,du, \end{aligned}$$

and therefore we infer that

$$\begin{aligned} \mathbb {E}\left( dL_{u}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right)&=\mu _{i}\times \mathbb {E}\left( \lambda _{u-}^{i}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{u}\right) du\quad \forall \,u\le t. \end{aligned}$$

Differentiating Eq. (47) with respect to time, we find that \(\mathbb {E}\left( \lambda _{t}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \) is the solution of an ordinary differential equation (ODE):

$$\begin{aligned} \frac{\partial }{\partial t}\mathbb {E}\left( \lambda _{t}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right)= & {} -\kappa _{i}\left( \lambda _{s}^{i}-c_{i,t}\right) +\kappa _{i}^{2}\int _{s}^{t}e^{\kappa _{i}(u-t)}\left( \lambda _{s}^{i}-c_{i,u}\right) du\\&+\,\delta _{i,1}\mu _{1}\mathbb {E}\left( \lambda _{t}^{1}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) -\kappa _{i}\delta _{i,1}\mu _{1}\int _{s}^{t}e^{-\kappa _{i}(t-u)}\mathbb {E}\left( \lambda _{u-}^{1}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) du\\&+\,\delta _{i,2}\mu _{2}\mathbb {E}\left( \lambda _{t}^{2}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) -\kappa _{i}\delta _{i,2}\mu _{2}\int _{s}^{t}e^{-\kappa _{i}(t-u)}\mathbb {E}\left( \lambda _{u-}^{2}\,|\,\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) du. \end{aligned}$$

Using Eq. (47) allows us to rewrite these ODEs as follows:

$$\begin{aligned} \left( \begin{array}{c} \frac{\partial }{\partial t}\mathbb {E}\left( \lambda _{t}^{1}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \\ \frac{\partial }{\partial t}\mathbb {E}\left( \lambda _{t}^{2}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \end{array}\right)= & {} \left( \begin{array}{c} \kappa _{1}c_{1,t}\\ \kappa _{2}c_{2,t} \end{array}\right) +\left( \begin{array}{c@{\quad }c} \delta _{1,1}\mu _{1}-\kappa _{1} &{} \delta _{1,2}\mu _{2}\\ \delta _{2,1}\mu _{1} &{} \delta _{2,2}\mu _{2}-\kappa _{2} \end{array}\right) \left( \begin{array}{c} \mathbb {E}\left( \lambda _{t}^{1}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \\ \mathbb {E}\left( \lambda _{t}^{2}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \end{array}\right) .\nonumber \\ \end{aligned}$$
(48)

Solving this system of equations requires determining the eigenvalues \(\gamma \) and eigenvectors \((v_{1},v_{2})\) of the matrix on the right-hand side of this system:

$$\begin{aligned} \left( \begin{array}{c@{\quad }c} (\delta _{1,1}\mu _{1}-\kappa _{1}) &{} \delta _{1,2}\mu _{2}\\ \delta _{2,1}\mu _{1} &{} (\delta _{2,2}\mu _{2}-\kappa _{2}) \end{array}\right) \left( \begin{array}{c} v_{1}\\ v_{2} \end{array}\right) =\gamma \left( \begin{array}{c} v_{1}\\ v_{2} \end{array}\right) . \end{aligned}$$

We know that the eigenvalues make the determinant of the following matrix vanish:

$$\begin{aligned} \det \left( \begin{array}{c@{\quad }c} (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma &{} \delta _{1,2}\mu _{2}\\ \delta _{2,1}\mu _{1} &{} (\delta _{2,2}\mu _{2}-\kappa _{2})-\gamma \end{array}\right) =0, \end{aligned}$$

and are the solutions of the second-order equation:

$$\begin{aligned} \gamma ^{2}-\gamma \left( (\delta _{1,1}\mu _{1}-\kappa _{1})+(\delta _{2,2}\mu _{2}-\kappa _{2})\right) +(\delta _{1,1}\mu _{1}-\kappa _{1})(\delta _{2,2}\mu _{2}-\kappa _{2})-\delta _{1,2}\delta _{2,1}\mu _{1}\mu _{2}=0. \end{aligned}$$

The roots of this equation are \(\gamma _{1}\) and \(\gamma _{2}\), as defined in Eq. (14). One way to find an eigenvector is to note that it must be orthogonal to each row of the matrix:

$$\begin{aligned} \left( \begin{array}{c@{\quad }c} (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma &{} \delta _{1,2}\mu _{2}\\ \delta _{2,1}\mu _{1} &{} (\delta _{2,2}\mu _{2}-\kappa _{2})-\gamma \end{array}\right) \left( \begin{array}{c} v_{1}\\ v_{2} \end{array}\right) =0, \end{aligned}$$

then necessarily

$$\begin{aligned} \left( \begin{array}{c} v_{1}^{i}\\ v_{2}^{i} \end{array}\right) =\left( \begin{array}{c} -\delta _{1,2}\mu _{2}\\ (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{i} \end{array}\right) \quad \text {for }i=1,2. \end{aligned}$$

If we set \(D:=\mathrm {diag}(\gamma _{1},\gamma _{2})\), the matrix on the right-hand side of Eq. (48) admits the decomposition:

$$\begin{aligned} \left( \begin{array}{c@{\quad }c} \delta _{1,1}\mu _{1}-\kappa _{1} &{} \delta _{1,2}\mu _{2}\\ \delta _{2,1}\mu _{1} &{} \delta _{2,2}\mu _{2}-\kappa _{2} \end{array}\right)= & {} VDV^{-1}, \end{aligned}$$

where V is the matrix of eigenvectors, as defined in Eq. (16). Its determinant, \(\Upsilon \), and its inverse are provided by Eqs. (18) and (17), respectively. If we define two new variables, \(m_{i}:=\mathbb {E}\left( \lambda _{t}^{i}|\mathcal {F}_{s}\vee \mathcal {G}_{t}\right) \) for \(i=1,2\), as follows:

$$\begin{aligned} \left( \begin{array}{c} u_{1}\\ u_{2} \end{array}\right)= & {} V^{-1}\left( \begin{array}{c} m_{1}\\ m_{2} \end{array}\right) . \end{aligned}$$

then the system (48) decouples into two independent ODEs:

$$\begin{aligned} \frac{\partial }{\partial t}\left( \begin{array}{c} u_{1}\\ u_{2} \end{array}\right)= & {} V^{-1}\left( \begin{array}{c} \kappa _{1}c_{1,t}\\ \kappa _{2}c_{2,t} \end{array}\right) +\left( \begin{array}{c@{\quad }c} \gamma _{1} &{} 0\\ 0 &{} \gamma _{2} \end{array}\right) \left( \begin{array}{c} u_{1}\\ u_{2} \end{array}\right) . \end{aligned}$$
(49)

Introducing the notation

$$\begin{aligned} V^{-1}\left( \begin{array}{c} \kappa _{1}c_{1,t}\\ \kappa _{2}c_{2,t} \end{array}\right)= & {} \left( \begin{array}{c} \epsilon _{1}(t)\\ \epsilon _{2}(t) \end{array}\right) , \end{aligned}$$

leads to the following solution of the system (49):

$$\begin{aligned} \left( \begin{array}{c} u_{1}(t)\\ u_{2}(t) \end{array}\right)= & {} \left( \begin{array}{c} \int _{s}^{t}\epsilon _{1}(u)e^{\gamma _{1}\left( t-u\right) }du\\ \int _{s}^{t}\epsilon _{2}(u)e^{\gamma _{2}\left( t-u\right) }du \end{array}\right) +\left( \begin{array}{l@{\quad }l} e^{\gamma _{1}(t-s)} &{} 0\\ 0 &{} e^{\gamma _{2}(t-s)} \end{array}\right) V^{-1}\left( \begin{array}{c} \lambda _{s}^{1}\\ \lambda _{s}^{2} \end{array}\right) \end{aligned}$$

that allows us to infer the expression (15) for the moments of \(\lambda _{t}^{i}\). Notice that the determinant \(\Upsilon \) is always real. If the mutual-excitation parameters \(\delta _{1,2},\,\delta _{2,1}\) are positive then, since \(\mu _{1},\mu _{2}>0\), the determinant is also strictly positive and the matrix V is invertible. Finally, Eq. (15) states that, conditional on the sample path of the Markov chain \(\theta _{t}\), the processes \(\lambda _{t}^{1}\) and \(\lambda _{t}^{2}\) are Markov, given that their \(\mathcal {F}_{s}\vee \mathcal {G}_{t}\)-expectations only depend on the pair \(\left( \lambda _{t}^{1},\lambda _{t}^{2}\right) \). \(\square \)
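The spectral decomposition used in this proof can be verified numerically. The sketch below, with assumed parameter values, computes \(\gamma _{1},\gamma _{2}\) from the quadratic formula above, builds V from the stated eigenvector formula, and checks the identity \(VDV^{-1}\) against a direct eigenvalue computation.

```python
import numpy as np

# Assumed illustrative parameters
d11, d12, d21, d22 = 0.3, 0.2, 0.25, 0.4    # self/mutual excitation delta_{i,j}
k1, k2 = 5.0, 4.0                           # mean-reversion speeds kappa_i
mu1, mu2 = 0.5, 0.6                         # mean order sizes mu_i

a11, a22 = d11 * mu1 - k1, d22 * mu2 - k2
A = np.array([[a11, d12 * mu2],
              [d21 * mu1, a22]])

# Roots of gamma^2 - (a11 + a22) gamma + a11 a22 - d12 d21 mu1 mu2 = 0;
# the discriminant equals (a11 - a22)^2 + 4 d12 d21 mu1 mu2 > 0, so both roots are real
disc = np.sqrt((a11 + a22) ** 2 - 4 * (a11 * a22 - d12 * d21 * mu1 * mu2))
g1, g2 = (a11 + a22 + disc) / 2, (a11 + a22 - disc) / 2

# Eigenvector formula of the proof: v^i = (-d12 mu2, a11 - gamma_i)
V = np.array([[-d12 * mu2, -d12 * mu2],
              [a11 - g1, a11 - g2]])
D = np.diag([g1, g2])

assert np.allclose(A @ V, V @ D)                 # A v^i = gamma_i v^i
assert np.allclose(A, V @ D @ np.linalg.inv(V))  # A = V D V^{-1}
```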

Proof of Proposition 3.2

From the previous proposition, we infer that the expectations of the order arrival intensities, conditional on \(\mathcal {F}_{s}\) alone, are the solutions of the following system:

$$\begin{aligned} \left( \begin{array}{c} \mathbb {E}\left( \lambda _{t}^{1}\,|\,\mathcal {F}_{s}\right) \\ \mathbb {E}\left( \lambda _{t}^{2}\,|\,\mathcal {F}_{s}\right) \end{array}\right)= & {} V\int _{s}^{t}\left( \begin{array}{l@{\quad }l} e^{\gamma _{1}(t-u)} &{} 0\\ 0 &{} e^{\gamma _{2}(t-u)} \end{array}\right) V^{-1}\left( \begin{array}{c} \kappa _{1}\mathbb {E}\left( c_{1,u}\,|\,\mathcal {F}_{s}\right) \\ \kappa _{2}\mathbb {E}\left( c_{2,u}\,|\,\mathcal {F}_{s}\right) \end{array}\right) du\nonumber \\&+V\left( \begin{array}{l@{\quad }l} e^{\gamma _{1}(t-s)} &{} 0\\ 0 &{} e^{\gamma _{2}(t-s)} \end{array}\right) V^{-1}\left( \begin{array}{c} \lambda _{s}^{1}\\ \lambda _{s}^{2} \end{array}\right) . \end{aligned}$$
(50)

Given that \(\theta _{t}\) is a finite-state Markov chain with generator \(Q_{0}\), and recalling that \(c_{i}=\left( c_{i,1},\ldots ,c_{i,l}\right) ^{\top }\) for \(i=1,2\) are l-vectors, the expected level of mean reversion at time u is equal to:

$$\begin{aligned} \mathbb {E}\left( c_{i,u}|\mathcal {F}_{s}\right)= & {} \theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{i} \end{aligned}$$

and the expectations of the intensities, conditional on \(\mathcal {F}_{s}\), are then given by:

$$\begin{aligned} \left( \begin{array}{c} \mathbb {E}\left( \lambda _{t}^{1}\,|\,\mathcal {F}_{s}\right) \\ \mathbb {E}\left( \lambda _{t}^{2}\,|\,\mathcal {F}_{s}\right) \end{array}\right)= & {} V\int _{s}^{t}\left( \begin{array}{c@{\quad }c} e^{\gamma _{1}(t-u)} &{} 0\\ 0 &{} e^{\gamma _{2}(t-u)} \end{array}\right) V^{-1}\left( \begin{array}{c} \kappa _{1}\theta _{s}^{\top }\,\exp \left( Q_{0}(u-s)\right) \,c_{1}\\ \kappa _{2}\theta _{s}^{\top }\,\exp \left( Q_{0}(u-s)\right) \,c_{2} \end{array}\right) du\nonumber \\&+V\left( \begin{array}{c@{\quad }c} e^{\gamma _{1}(t-s)} &{} 0\\ 0 &{} e^{\gamma _{2}(t-s)} \end{array}\right) V^{-1}\left( \begin{array}{c} \lambda _{s}^{1}\\ \lambda _{s}^{2} \end{array}\right) . \end{aligned}$$
(51)

If we replace \(V^{-1}\) by its definition (17), we obtain that

$$\begin{aligned}&V^{-1}\left( \begin{array}{c} \kappa _{1}\theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{1}\\ \kappa _{2}\theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{2} \end{array}\right) \\&\qquad =\frac{1}{\Upsilon }\left( \begin{array}{c} \left( \begin{array}{c} \kappa _{1}\left( (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{2}\right) \theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{1}\\ +\,\kappa _{2}\delta _{1,2}\mu _{2}\theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{2} \end{array}\right) \\ \left( \begin{array}{c} \kappa _{1}\left( \gamma _{1}-(\delta _{1,1}\mu _{1}-\kappa _{1})\right) \theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{1}\\ -\,\kappa _{2}\delta _{1,2}\mu _{2}\theta _{s}^{\top }\,\exp \left( Q_{0}\left( u-s\right) \right) \,c_{2} \end{array}\right) \end{array}\right) . \end{aligned}$$

The integrand in Eq. (51) then becomes:

$$\begin{aligned}&\left( \begin{array}{c@{\quad }c} e^{\gamma _{1}(t-u)} &{} 0\\ 0 &{} e^{\gamma _{2}(t-u)} \end{array}\right) V^{-1}\left( \begin{array}{c} \kappa _{1}\theta _{s}^{\top }\,\exp \left( Q_{0}(u-s)\right) \,c_{1}\\ \kappa _{2}\theta _{s}^{\top }\,\exp \left( Q_{0}(u-s)\right) \,c_{2} \end{array}\right) \\&\quad =\frac{1}{\Upsilon }\left( \begin{array}{c} \left( \begin{array}{c} e^{\gamma _{1}(t-s)}\kappa _{1}\left( (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{2}\right) \theta _{s}^{\top }\,\exp \left( \left( Q_{0}-\gamma _{1}I\right) (u-s)\right) \,c_{1}\\ +\,e^{\gamma _{1}(t-s)}\kappa _{2}\delta _{1,2}\mu _{2}\theta _{s}^{\top }\,\exp \left( \left( Q_{0}-\gamma _{1}I\right) (u-s)\right) \,c_{2} \end{array}\right) \\ \left( \begin{array}{c} e^{\gamma _{2}(t-s)}\kappa _{1}\left( \gamma _{1}-(\delta _{1,1}\mu _{1}-\kappa _{1})\right) \theta _{s}^{\top }\,\exp \left( \left( Q_{0}-\gamma _{2}I\right) (u-s)\right) \,c_{1}\\ -\,e^{\gamma _{2}(t-s)}\kappa _{2}\delta _{1,2}\mu _{2}\theta _{s}^{\top }\,\exp \left( \left( Q_{0}-\gamma _{2}I\right) (u-s)\right) \,c_{2} \end{array}\right) \end{array}\right) . \end{aligned}$$

and we can conclude by direct integration that the expected values of \(\lambda _{t}^{i}\) are given by Eq. (19). This result also states that the processes \(\lambda _{t}^{1}\) and \(\lambda _{t}^{2}\) are Markov, given that their \(\mathcal {F}_{s}\)-expectations only depend on the information available at time s: \(\left( \lambda _{s}^{1},\lambda _{s}^{2},\theta _{s}^{1},\theta _{s}^{2}\right) \). \(\square \)
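The key ingredient of this proposition, \(\mathbb {E}\left( c_{i,u}|\mathcal {F}_{s}\right) =\theta _{s}^{\top }\exp \left( Q_{0}(u-s)\right) c_{i}\), only involves the transition semigroup of the chain. The sketch below cross-checks this matrix-exponential formula against an Euler integration of the Kolmogorov forward equation \(p'=pQ_{0}\); the two-regime generator and levels are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Assumed two-regime generator Q0 and regime-dependent levels c_{1,j}
Q0 = np.array([[-0.8, 0.8],
               [0.5, -0.5]])
c1 = np.array([1.0, 2.5])        # mean-reversion level of lambda^1 in regimes 1 and 2
theta_s = np.array([1.0, 0.0])   # chain in regime 1 at time s
h = 2.0                          # horizon u - s

# exp(Q0 h) computed by diagonalization (Q0 has distinct real eigenvalues here)
evals, U = np.linalg.eig(Q0)
expQ = (U @ np.diag(np.exp(evals * h)) @ np.linalg.inv(U)).real

closed = theta_s @ expQ @ c1     # E(c_{1,u} | F_s) = theta_s^T exp(Q0 (u - s)) c_1

# Cross-check: Euler scheme for the Kolmogorov forward equation p'(t) = p(t) Q0
p, dt = theta_s.copy(), 1e-4
for _ in range(int(h / dt)):
    p = p + (p @ Q0) * dt
euler = p @ c1

assert abs(closed - euler) < 1e-3
```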

Proof of Corollary 3.3

To prove this statement, it is sufficient to show that the conditional expectations of these processes with respect to \(\mathcal {F}_{s}\) depend exclusively upon the information available at time s. Using the tower property of conditional expectation, the expected number of supply orders, conditional on \(\mathcal {F}_{s}\), can be written as the nested expectation:

$$\begin{aligned} \mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\right)= & {} \mathbb {E}\left( \mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\vee \mathcal {H}_{t}\right) |\mathcal {F}_{s}\right) . \end{aligned}$$

By construction, the compensator of the process \(N_{t}^{1}\) is the \(\mathcal {H}_{t}\)-adapted process \(\int _{0}^{t}\lambda _{u}^{1}du\), such that the compensated process \(M_{t}^{1}=N_{t}^{1}-\int _{0}^{t}\lambda _{u}^{1}du\) is a martingale. Given that \(\mathbb {E}\left( M_{t}^{1}|\mathcal {F}_{s}\vee \mathcal {H}_{t}\right) =M_{s}^{1}\), we deduce that \(\mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\vee \mathcal {H}_{t}\right) =N_{s}^{1}+\int _{s}^{t}\lambda _{u}^{1}du\). Using Fubini's theorem, we infer that

$$\begin{aligned} \mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\right)= & {} \left( N_{s}^{1}+\mathbb {E}\left( \int _{s}^{t}\lambda _{u}^{1}du|\mathcal {F}_{s}\right) \right) \nonumber \\= & {} N_{s}^{1}+\int _{s}^{t}\mathbb {E}\left( \lambda _{u}^{1}|\mathcal {F}_{s}\right) du. \end{aligned}$$
(52)

According to Proposition 3.2, \(\mathbb {E}\left( \lambda _{u}^{1}|\mathcal {F}_{s}\right) \) depends only upon \(\lambda _{s}^{1}\), \(\lambda _{s}^{2}\) and \(\theta _{s}\). From Eq. (52), we immediately deduce that \(\mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\right) \) is exclusively a function of \(\left( \lambda _{s}^{1},\lambda _{s}^{2},\theta _{s},N_{s}^{1}\right) \). The same holds for \(N_{t}^{2}\). By definition, \(L_{t}^{1}\) is a sum of independent random variables:

$$\begin{aligned} \mathbb {E}\left( L_{t}^{1}|\mathcal {F}_{s}\right)= & {} \mathbb {E}\left( \sum _{n=1}^{N_{t}^{1}}O_{n}^{1}|\mathcal {F}_{s}\right) \\= & {} \mu _{1}\mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\right) . \end{aligned}$$

As \(\mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\right) \) is a function of \(\left( \lambda _{s}^{1},\lambda _{s}^{2},\theta _{s},N_{s}^{1}\right) \), the same conclusion holds for \(\mathbb {E}\left( L_{t}^{1}|\mathcal {F}_{s}\right) \). A similar reasoning for \(L_{t}^{2}\), combined with Proposition 3.2, completes the proof. \(\square \)
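The compensator identity \(\mathbb {E}\left( N_{t}^{1}|\mathcal {F}_{s}\right) =N_{s}^{1}+\int _{s}^{t}\mathbb {E}\left( \lambda _{u}^{1}|\mathcal {F}_{s}\right) du\) behind this proof can be illustrated by simulation. The sketch below uses Lewis–Ogata thinning with an assumed deterministic intensity (not the model's self-excited one, which would require the full simulation scheme) and checks that the Monte Carlo mean of \(N_{T}\) matches the compensator \(\int _{0}^{T}\lambda (u)du\).

```python
import numpy as np

rng = np.random.default_rng(42)

lam = lambda u: 2.0 + np.sin(u)   # assumed deterministic intensity, for illustration
T, lam_max = 1.0, 3.0             # horizon and dominating rate for thinning

def count_by_thinning() -> int:
    """Simulate N_T of the inhomogeneous Poisson process by Lewis-Ogata thinning."""
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam_max)     # candidate arrival at dominating rate
        if t > T:
            return n
        if rng.random() < lam(t) / lam_max:     # accept with probability lam(t)/lam_max
            n += 1

n_paths = 20_000
mc_mean = np.mean([count_by_thinning() for _ in range(n_paths)])
compensator = 2.0 * T + (1.0 - np.cos(T))       # int_0^T (2 + sin u) du
assert abs(mc_mean - compensator) < 0.05
```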

Proof of Proposition 3.4

Recalling Eq. (48), we infer that the expectations of \(c_{j,t}\lambda _{t}^{i}\) for \(i,j=1,2\) are solutions of a system of ordinary differential equations (ODEs):

$$\begin{aligned}&\underbrace{\left( \begin{array}{c} \frac{\partial }{\partial t}\mathbb {E}\left( c_{1,t}\lambda _{t}^{1}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \\ \frac{\partial }{\partial t}\mathbb {E}\left( c_{2,t}\lambda _{t}^{1}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \\ \frac{\partial }{\partial t}\mathbb {E}\left( c_{1,t}\lambda _{t}^{2}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \\ \frac{\partial }{\partial t}\mathbb {E}\left( c_{2,t}\lambda _{t}^{2}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \end{array}\right) }_{:=dE(t)}=\underbrace{\left( \begin{array}{c@{\quad }c@{\quad }c} \kappa _{1} &{} 0 &{} 0\\ 0 &{} 0 &{} \kappa _{1}\\ 0 &{} 0 &{} \kappa _{2}\\ 0 &{} \kappa _{2} &{} 0 \end{array}\right) }_{:=K}\underbrace{\left( \begin{array}{c} c_{1,t}^{2}\\ c_{2,t}^{2}\\ c_{1,t}c_{2,t} \end{array}\right) }_{:=C_{t}^{2}}\\&\quad +\underbrace{\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \delta _{1,1}\mu _{1}-\kappa _{1} &{} 0 &{} \delta _{1,2}\mu _{2} &{} 0\\ 0 &{} \delta _{1,1}\mu _{1}-\kappa _{1} &{} 0 &{} \delta _{1,2}\mu _{2}\\ \delta _{2,1}\mu _{1} &{} 0 &{} \delta _{2,2}\mu _{2}-\kappa _{2} &{} 0\\ 0 &{} \delta _{2,1}\mu _{1} &{} 0 &{} \delta _{2,2}\mu _{2}-\kappa _{2} \end{array}\right) }_{WFW^{-1}}\underbrace{\left( \begin{array}{c} \mathbb {E}\left( c_{1,t}\lambda _{t}^{1}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \\ \mathbb {E}\left( c_{2,t}\lambda _{t}^{1}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \\ \mathbb {E}\left( c_{1,t}\lambda _{t}^{2}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \\ \mathbb {E}\left( c_{2,t}\lambda _{t}^{2}|\mathcal {F}_{0}\vee \mathcal {G}_{t}\right) \end{array}\right) }_{E(t)}. \end{aligned}$$

We summarize this system of ODEs as follows:

$$\begin{aligned} dE(t)= & {} K\,C_{t}^{2}+W\,F\,W^{-1}E(t). \end{aligned}$$

If we set \(U(t)=W^{-1}E(t)\), this last system can be rewritten as:

$$\begin{aligned} dU(t)= & {} W^{-1}K\,C_{t}^{2}+F\,U(t), \end{aligned}$$

which admits the following solution:

$$\begin{aligned} U(t)= & {} \int _{0}^{t}\exp \left( F\,s\right) \,W^{-1}K\,C_{s}^{2}\,ds+\exp \left( F\,t\right) U(0), \end{aligned}$$

and we can conclude. \(\square \)
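The proof's recipe, diagonalize the coupling matrix, solve the decoupled scalar ODEs by variation of constants, and map back, can be tested on a small assumed system \(E'(t)=b(t)+ME(t)\), where M stands in for \(WFW^{-1}\) and b(t) for \(K\,C_{t}^{2}\). The matrix and forcing below are illustrative only; the decoupled solution is compared with a direct Euler integration of the coupled system.

```python
import numpy as np

# Assumed small example with the structure of the proof: E'(t) = b(t) + M E(t)
M = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
b = lambda t: np.array([np.cos(t), 1.0])   # illustrative forcing term

evals, W = np.linalg.eig(M)                # M = W F W^{-1}, F = diag(evals)
Winv = np.linalg.inv(W)

def solve_decoupled(E0, T, n_quad=20_000):
    """Variation of constants on the decoupled system U' = W^{-1} b(t) + F U."""
    s = np.linspace(0.0, T, n_quad + 1)
    h = s[1] - s[0]
    bs = np.array([Winv @ b(si) for si in s])   # transformed forcing, shape (n+1, 2)
    U_T = np.exp(evals * T) * (Winv @ E0)
    for i in range(2):
        # Trapezoidal quadrature of int_0^T e^{f_i (T - s)} (W^{-1} b(s))_i ds
        integrand = np.exp(evals[i] * (T - s)) * bs[:, i]
        U_T[i] += h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return (W @ U_T).real                       # map back: E(T) = W U(T)

def solve_direct(E0, T, n_steps=100_000):
    """Reference solution: explicit Euler on the coupled system."""
    dt = T / n_steps
    E = np.asarray(E0, dtype=float).copy()
    for k in range(n_steps):
        E = E + (b(k * dt) + M @ E) * dt
    return E

E0 = np.array([1.0, 0.5])
ref = solve_direct(E0, 2.0)
out = solve_decoupled(E0, 2.0)
assert np.allclose(out, ref, atol=1e-3)
```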

Proof of Proposition 3.5

Recalling Eq. (27), we can expand it as follows:

$$\begin{aligned}&\exp \left( F\,s\right) W^{-1}\left( \begin{array}{c} \kappa _{1}c_{1,s}^{2}\\ \kappa _{1}c_{1,s}c_{2,s}\\ \kappa _{2}c_{1,s}c_{2,s}\\ \kappa _{2}c_{2,s}^{2} \end{array}\right) \\&\quad =\frac{1}{\Upsilon }\left( \begin{array}{c} \kappa _{1}\left( (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{2}\right) \left( e^{\gamma _{1}s}c_{1,s}^{2}\right) +\delta _{1,2}\mu _{2}\kappa _{2}\left( e^{\gamma _{1}s}c_{1,s}c_{2,s}\right) \\ \kappa _{1}\left( \gamma _{1}-(\delta _{1,1}\mu _{1}-\kappa _{1})\right) \left( e^{\gamma _{2}s}c_{1,s}^{2}\right) -\delta _{1,2}\mu _{2}\kappa _{2}\left( e^{\gamma _{2}s}c_{1,s}c_{2,s}\right) \\ \kappa _{1}\left( (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{2}\right) \left( e^{\gamma _{1}s}c_{1,s}c_{2,s}\right) +\delta _{1,2}\mu _{2}\kappa _{2}\left( e^{\gamma _{1}s}c_{2,s}^{2}\right) \\ \kappa _{1}\left( \gamma _{1}-(\delta _{1,1}\mu _{1}-\kappa _{1})\right) \left( e^{\gamma _{2}s}c_{1,s}c_{2,s}\right) -\delta _{1,2}\mu _{2}\kappa _{2}\left( e^{\gamma _{2}s}c_{2,s}^{2}\right) \end{array}\right) , \end{aligned}$$

and its expectation is given by

$$\begin{aligned}&\mathbb {E}\left( \exp \left( F\,s\right) W^{-1}\left( \begin{array}{c} \kappa _{1}c_{1,s}^{2}\\ \kappa _{1}c_{1,s}c_{2,s}\\ \kappa _{2}c_{1,s}c_{2,s}\\ \kappa _{2}c_{2,s}^{2} \end{array}\right) \right) \\&\quad =\frac{1}{\Upsilon }\left( \begin{array}{c} \kappa _{1}\left( (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{2}\right) \left( \theta _{0}e^{\left( Q_{0}+\gamma _{1}I\right) s}\bar{c}_{1}^{2}\right) +\delta _{1,2}\mu _{2}\kappa _{2}\left( \theta _{0}e^{\left( Q_{0}+\gamma _{1}I\right) s}\bar{c}_{1,2}\right) \\ \kappa _{1}\left( \gamma _{1}-(\delta _{1,1}\mu _{1}-\kappa _{1})\right) \left( \theta _{0}e^{\left( Q_{0}+\gamma _{2}I\right) s}\bar{c}_{1}^{2}\right) -\delta _{1,2}\mu _{2}\kappa _{2}\left( \theta _{0}e^{\left( Q_{0}+\gamma _{2}I\right) s}\bar{c}_{1,2}\right) \\ \kappa _{1}\left( (\delta _{1,1}\mu _{1}-\kappa _{1})-\gamma _{2}\right) \left( \theta _{0}e^{\left( Q_{0}+\gamma _{1}I\right) s}\bar{c}_{1,2}\right) +\delta _{1,2}\mu _{2}\kappa _{2}\left( \theta _{0}e^{\left( Q_{0}+\gamma _{1}I\right) s}\bar{c}_{2}^{2}\right) \\ \kappa _{1}\left( \gamma _{1}-(\delta _{1,1}\mu _{1}-\kappa _{1})\right) \left( \theta _{0}e^{\left( Q_{0}+\gamma _{2}I\right) s}\bar{c}_{1,2}\right) -\delta _{1,2}\mu _{2}\kappa _{2}\left( \theta _{0}e^{\left( Q_{0}+\gamma _{2}I\right) s}\bar{c}_{2}^{2}\right) \end{array}\right) . \end{aligned}$$

Integrating this last equation allows us to conclude. \(\square \)

Proof of Proposition 3.6

Recalling the expression (24) of the infinitesimal generator, we have

$$\begin{aligned} \mathcal {A}\left( \left( \lambda _{t}^{1}\right) ^{2}\right)= & {} 2\kappa _{1}\left( c_{1,t}-\lambda _{t}^{1}\right) \lambda _{t}^{1}+\lambda _{t}^{1}\int _{-\infty }^{+\infty }\left[ \left( \lambda _{t}^{1}+\delta _{1,1}z\right) ^{2}-\left( \lambda _{t}^{1}\right) ^{2}\right] \nu _{1}(dz)\\&+\,\lambda _{t}^{2}\int _{-\infty }^{+\infty }\left[ \left( \lambda _{t}^{1}+\delta _{1,2}z\right) ^{2}-\left( \lambda _{t}^{1}\right) ^{2}\right] \nu _{2}(dz),\\ \\ \mathcal {A}\left( \left( \lambda _{t}^{2}\right) ^{2}\right)= & {} 2\kappa _{2}\left( c_{2,t}-\lambda _{t}^{2}\right) \lambda _{t}^{2}+\lambda _{t}^{1}\int _{-\infty }^{+\infty }\left[ \left( \lambda _{t}^{2}+\delta _{2,1}z\right) ^{2}-\left( \lambda _{t}^{2}\right) ^{2}\right] \nu _{1}(dz)\\&+\,\lambda _{t}^{2}\int _{-\infty }^{+\infty }\left[ \left( \lambda _{t}^{2}+\delta _{2,2}z\right) ^{2}-\left( \lambda _{t}^{2}\right) ^{2}\right] \nu _{2}(dz),\\ \\ \mathcal {A}\left( \lambda _{t}^{1}\lambda _{t}^{2}\right)= & {} \kappa _{1}\left( c_{1,t}-\lambda _{t}^{1}\right) \lambda _{t}^{2}+\kappa _{2}\left( c_{2,t}-\lambda _{t}^{2}\right) \lambda _{t}^{1}\\&+\,\lambda _{t}^{1}\int _{-\infty }^{+\infty }\left[ \left( \lambda _{t}^{1}+\delta _{1,1}z\right) \left( \lambda _{t}^{2}+\delta _{2,1}z\right) -\lambda _{t}^{1}\lambda _{t}^{2}\right] \nu _{1}(dz)\\&+\,\lambda _{t}^{2}\int _{-\infty }^{+\infty }\left[ \left( \lambda _{t}^{1}+\delta _{1,2}z\right) \left( \lambda _{t}^{2}+\delta _{2,2}z\right) -\lambda _{t}^{1}\lambda _{t}^{2}\right] \nu _{2}(dz). \end{aligned}$$

Given that \(\frac{\partial }{\partial t}g=\mathbb {E}\left( \mathcal {A}g\,|\,\mathcal {F}_{0}\right) \), we can conclude. \(\square \)
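Taking expectations in the generator expressions and using the relation \(\frac{\partial }{\partial t}\mathbb {E}(g)=\mathbb {E}\left( \mathcal {A}g\,|\,\mathcal {F}_{0}\right) \) yields a closed linear ODE system for the first and second moments of the intensities. The following Python sketch integrates that system under illustrative assumptions that are not taken from the paper: the reversion levels \(c_{k,t}\equiv c_k\) are held constant (regime switches ignored), and the jump measures \(\nu _k\) are exponential with mean \(\mu _k=\int z\,\nu _k(dz)\), so that \(\int z^{2}\,\nu _k(dz)=2\mu _k^{2}\). All parameter values are hypothetical.

```python
import numpy as np

# Hypothetical parameters (illustrative only, not the paper's estimates).
kappa = np.array([2.0, 2.0])          # mean-reversion speeds kappa_1, kappa_2
c     = np.array([1.0, 1.5])          # reversion levels c_1, c_2, held constant here
delta = np.array([[0.3, 0.2],         # delta[k-1, j-1] = delta_{k,j}
                  [0.2, 0.3]])
mu    = np.array([0.5, 0.5])          # mu_k  = int z nu_k(dz)
m2    = 2.0 * mu**2                   # int z^2 nu_k(dz) for exponential jumps

def moment_rhs(m):
    """Time derivative of (E[l1], E[l2], E[l1^2], E[l2^2], E[l1 l2]),
    obtained by taking expectations in the generator expressions."""
    l1, l2, s1, s2, x = m
    dl1 = kappa[0]*(c[0] - l1) + delta[0, 0]*mu[0]*l1 + delta[0, 1]*mu[1]*l2
    dl2 = kappa[1]*(c[1] - l2) + delta[1, 0]*mu[0]*l1 + delta[1, 1]*mu[1]*l2
    ds1 = (2*kappa[0]*(c[0]*l1 - s1)
           + 2*delta[0, 0]*mu[0]*s1 + delta[0, 0]**2*m2[0]*l1
           + 2*delta[0, 1]*mu[1]*x  + delta[0, 1]**2*m2[1]*l2)
    ds2 = (2*kappa[1]*(c[1]*l2 - s2)
           + 2*delta[1, 0]*mu[0]*x  + delta[1, 0]**2*m2[0]*l1
           + 2*delta[1, 1]*mu[1]*s2 + delta[1, 1]**2*m2[1]*l2)
    dx  = (kappa[0]*(c[0]*l2 - x) + kappa[1]*(c[1]*l1 - x)
           + delta[1, 0]*mu[0]*s1 + delta[0, 0]*mu[0]*x
           + delta[0, 0]*delta[1, 0]*m2[0]*l1
           + delta[0, 1]*mu[1]*s2 + delta[1, 1]*mu[1]*x
           + delta[0, 1]*delta[1, 1]*m2[1]*l2)
    return np.array([dl1, dl2, ds1, ds2, dx])

def integrate_moments(m0, T=5.0, dt=1e-3):
    """Explicit Euler integration of the moment system on [0, T]."""
    m = np.array(m0, dtype=float)
    for _ in range(round(T / dt)):
        m = m + dt * moment_rhs(m)
    return m

# Start from the deterministic point lambda = c (zero initial variance).
m_T = integrate_moments([c[0], c[1], c[0]**2, c[1]**2, c[0]*c[1]])
var1 = m_T[2] - m_T[0]**2   # Var(lambda^1_T), must be non-negative
```

With the values above, the stationary first moments solve the linear system \(1.85\,\mathbb {E}\lambda ^{1}=2+0.1\,\mathbb {E}\lambda ^{2}\), \(1.85\,\mathbb {E}\lambda ^{2}=3+0.1\,\mathbb {E}\lambda ^{1}\), giving \(\mathbb {E}\lambda ^{1}\approx 1.17\); by \(T=5\) the numerical moments are close to these levels.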

Proof of Proposition 3.9

Let us assume that \(\theta _{t}=e_{i}\). If we denote \(g(\lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},\theta _{t})=\mathbb {E}\left( \omega ^{N_{T}^{k}}\,|\,\mathcal {F}_{t}\right) \), then g is a solution of the following Itô equation for semimartingales:

$$\begin{aligned} 0= & {} g_{t}+\kappa _{1}\left( c_{1,t}-\lambda _{t}^{1}\right) g_{\lambda ^{1}}+\kappa _{2} \left( c_{2,t}-\lambda _{t}^{2}\right) g_{\lambda ^{2}}\nonumber \\&+\,\lambda _{t}^{1}\int _{-\infty }^{+\infty }g \left( \lambda _{t}^{1}+\delta _{1,1}z,J_{t}^{1}+(z,1)^{\top }, \lambda _{t}^{2}+\delta _{2,1}z,J_{t}^{2},e_{i}\right) \nonumber \\&-\,g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{i} \right) d\nu _{1}(z)\nonumber \\&+\,\lambda _{t}^{2}\int _{-\infty }^{+\infty }g \left( \lambda _{t}^{1}+\delta _{1,2}z,J_{t}^{1},\lambda _{t}^{2}+\delta _{2,2}z, J_{t}^{2}+(z,1)^{\top },e_{i}\right) \nonumber \\&-\,g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{i}\right) d \nu _{2}(z)\nonumber \\&+\,\sum _{j\ne i}^{l}q_{i,j}\left( g(\lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{j})-g(\lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{i})\right) . \end{aligned}$$
(53)

Next, we assume that g is an exponential-affine function of \(\lambda _{t}^{1}\), \(\lambda _{t}^{2}\) and \(N_{t}^{k}\):

$$\begin{aligned} g= & {} \exp \left( A(t,T,\theta _{t})+B_{1}(t,T)\lambda _{t}^{1}+B_{2}(t,T)\lambda _{t}^{2}+C(t,T)N_{t}^{k}\right) , \end{aligned}$$

where \(A(t,T,e_{i})\) for \(i=1,\ldots ,l\), \(B_{1}(t,T)\), \(B_{2}(t,T)\) and \(C(t,T)\) are time-dependent functions. The partial derivatives of g are then given by:

$$\begin{aligned} g_{t}= & {} \left( \frac{\partial }{\partial t}A(t,T,e_{i})+\frac{\partial }{\partial t}B_{1}(t,T)\lambda _{t}^{1}+\frac{\partial }{\partial t}B_{2}(t,T)\lambda _{t}^{2}+\frac{\partial }{\partial t}C(t,T)N_{t}^{k}\right) g, \\ g_{\lambda ^{1}}= & {} B_{1}(t,T)g \quad \text {and}\quad g_{\lambda ^{2}}=B_{2}(t,T)g. \end{aligned}$$

The integrands in Eq. (53) can then be rewritten, with the notations \(A:=A(t,T,e_{i})\), \(B_{1}:=B_{1}(t,T)\), \(B_{2}:=B_{2}(t,T)\) and \(C:=C(t,T)\), as follows:

$$\begin{aligned}&\int _{-\infty }^{+\infty }g\left( \lambda _{t}^{1}+\delta _{1,1}z,J_{t}^{1}+\left( z,1\right) ^{\top },\lambda _{t}^{2}+\delta _{2,1}z,J_{t}^{2},e_{i}\right) -g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{i}\right) \,d\nu _{1}\left( z\right) \\&\qquad =g\left[ e^{1_{k=1}C}\psi _{1}\left( B_{1}\delta _{1,1}+B_{2}\delta _{2,1}\right) -1\right] , \\&\int _{-\infty }^{+\infty }g\left( \lambda _{t}^{1}+\delta _{1,2}z,J_{t}^{1},\lambda _{t}^{2}+\delta _{2,2}z,J_{t}^{2}+\left( z,1\right) ^{\top },e_{i}\right) -g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{i}\right) \,d\nu _{2}\left( z\right) \\&\qquad =g\left[ e^{1_{k=2}C}\psi _{2}\left( B_{1}\delta _{1,2}+B_{2}\delta _{2,2}\right) -1\right] . \end{aligned}$$

As each row of the generator sums to zero, \(q_{i,i}=-\sum _{j\ne i}^{l}q_{i,j}\), we have that

$$\begin{aligned} \sum _{j\ne i}^{l}q_{i,j}\left( g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{j} \right) -g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{i}\right) \right)= & {} \sum _{j=1}^{l}q_{i,j}g\left( \lambda _{t}^{1},J_{t}^{1},\lambda _{t}^{2},J_{t}^{2},e_{j}\right) . \end{aligned}$$

Then Eq. (53) becomes:

$$\begin{aligned} 0= & {} \left( \frac{\partial }{\partial t}A+\frac{\partial }{\partial t}B_{1}\,\lambda _{t}^{1}+\frac{\partial }{\partial t}B_{2}\,\lambda _{t}^{2}+\frac{\partial }{\partial t}C\,N_{t}^{k}\right) e^{A\left( t,T,e_{i}\right) }\nonumber \\&+\,\kappa _{1}\left( c_{1,t}-\lambda _{t}^{1}\right) B_{1}\,e^{A\left( t,T,e_{i}\right) }+\kappa _{2}\left( c_{2,t}-\lambda _{t}^{2}\right) B_{2}\,e^{A\left( t,T,e_{i}\right) }\nonumber \\&+\,\lambda _{t}^{1}\left( e^{1_{k=1}C}\psi _{1}\left( B_{1}\delta _{1,1}+B_{2}\delta _{2,1}\right) -1\right) e^{A\left( t,T,e_{i}\right) }\nonumber \\&+\,\lambda _{t}^{2}\left( e^{1_{k=2}C}\psi _{2}\left( B_{1}\delta _{1,2}+B_{2}\delta _{2,2}\right) -1\right) e^{A\left( t,T,e_{i}\right) }\nonumber \\&+\sum _{j=1}^{l}q_{i,j}\,e^{A\left( t,T,e_{j}\right) }, \end{aligned}$$
(54)

from which we guess that \(C(t,T)=\ln \omega \). Regrouping terms allows us to infer that

$$\begin{aligned} 0= & {} \frac{\partial }{\partial t}A\,e^{A(t,T,e_{i})}+\kappa _{1}c_{1,t}\,B_{1}\,e^{A(t,T,e_{i})}+\kappa _{2}c_{2,t}\,B_{2}\,e^{A(t,T,e_{i})}+\sum _{j=1}^{l}q_{i,j}e^{A(t,T,e_{j})}\\&+\,\lambda _{t}^{1}\left( \frac{\partial }{\partial t}B_{1}-\kappa _{1}B_{1}+\left[ \omega ^{1_{k=1}}\psi _{1}\left( B_{1}\delta _{1,1}+B_{2}\delta _{2,1}\right) -1\right] \right) e^{A(t,T,e_{i})}\\&+\,\lambda _{t}^{2}\left( \frac{\partial }{\partial t}B_{2}-\kappa _{2}B_{2}+\left[ \omega ^{1_{k=2}}\psi _{2}\left( B_{1}\delta _{1,2}+B_{2}\delta _{2,2}\right) -1\right] \right) e^{A(t,T,e_{i})}. \end{aligned}$$

Given that \(\lambda _{t}^{1}\) and \(\lambda _{t}^{2}\) are random quantities, this equation is satisfied if and only if

$$\begin{aligned} \frac{\partial }{\partial t}B_{1}= & {} \kappa _{1}B_{1}-\left[ \omega ^{1_{k=1}}\psi _{1}\left( B_{1}\delta _{1,1}+B_{2}\delta _{2,1}\right) -1\right] \\ \frac{\partial }{\partial t}B_{2}= & {} \kappa _{2}B_{2}-\left[ \omega ^{1_{k=2}}\psi _{2}\left( B_{1}\delta _{1,2}+B_{2}\delta _{2,2}\right) -1\right] \\ \left( \frac{\partial }{\partial t}A\right) e^{A(t,T,e_{i})}= & {} -\kappa _{1}c_{1,t}\,B_{1}\,e^{A(t,T,e_{i})}-\kappa _{2}c_{2,t}\,B_{2}\,e^{A(t,T,e_{i})}-\sum _{j=1}^{l}q_{i,j}e^{A(t,T,e_{j})}. \end{aligned}$$

If we define \(\tilde{A}(t,T)=\left( e^{A(t,T,e_{1})},\ldots ,e^{A(t,T,e_{l})}\right) ^{\top }\), the equation for A can finally be put in matrix form as:

$$\begin{aligned} \frac{\partial \tilde{A}}{\partial t}+\left( \text {diag}\left( \kappa _{1}c_{1,t}\,B_{1}+\kappa _{2}c_{2,t}\,B_{2}\right) +Q_{0}\right) \tilde{A}=0. \end{aligned}$$

\(\square \)
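The matrix ODE for \(\tilde{A}\) is linear and is solved backward from \(t=T\). The Python sketch below illustrates this structure under simplifying assumptions not made in the paper: two regimes, \(B_{1}\) and \(B_{2}\) frozen at constant values (in the proof they are themselves time dependent), terminal condition \(\tilde{A}(T,T)=(1,\ldots ,1)^{\top }\) (i.e. \(A(T,T,e_{i})=0\), assumed here), and hypothetical values for \(Q_{0}\) and the regime-dependent levels \(c_{k}(e_{i})\).

```python
import numpy as np

# Hypothetical two-regime parameters (illustrative only).
Q0 = np.array([[-0.5,  0.5],
               [ 0.3, -0.3]])        # generator of the Markov chain
kappa = np.array([2.0, 2.0])
c1 = np.array([1.0, 2.0])            # c_1(e_i): reversion level of lambda^1 per regime
c2 = np.array([1.5, 3.0])            # c_2(e_i)
B1, B2 = -0.1, -0.2                  # B_1, B_2 frozen at constants for illustration

# In tau = T - t the backward equation reads d A_tilde / d tau = M A_tilde,
# with A_tilde(tau = 0) = (1, ..., 1)^T.
M = np.diag(kappa[0]*c1*B1 + kappa[1]*c2*B2) + Q0

def A_tilde(tau, n_steps=10_000):
    """RK4 integration of the linear ODE from the terminal condition."""
    a = np.ones(2)
    h = tau / n_steps
    for _ in range(n_steps):
        k1 = M @ a
        k2 = M @ (a + 0.5*h*k1)
        k3 = M @ (a + 0.5*h*k2)
        k4 = M @ (a + h*k3)
        a = a + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return a

a1 = A_tilde(1.0)                    # A_tilde(t, T) at tau = T - t = 1
```

Since M is constant under these assumptions, `a1` must coincide with the matrix exponential \(e^{M}\mathbf {1}\), which provides a direct cross-check of the integration.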

Proof of Proposition 3.11

From the previous results, we know that \(B_{k}(t,T)\) is a solution of the following ODE

$$\begin{aligned} \frac{\partial }{\partial t}B_{k}=\kappa _{k}B_{k}+\left( -1\right) ^{k}\omega _{1}\alpha _{k}\mu _{k}-\left[ \psi _{k}\left( B_{1}\delta _{1,k}+B_{2}\delta _{2,k}+C_{k}\right) -1\right] ,\quad k=1,2 \end{aligned}$$

with terminal condition \(B_{k}(T,T)=\omega _{k+1}\). Set \(B_{k}(t,T)=D_{k}(T-t)\) with \(\tau =T-t\). Then

$$\begin{aligned} \frac{\partial B_{k}}{\partial t}=\frac{\partial D_{k}}{\partial \tau }\frac{\partial \tau }{\partial t}=-\frac{\partial D_{k}}{\partial \tau }. \end{aligned}$$

Thus we obtain

$$\begin{aligned} \frac{\partial D_{k}}{\partial \tau }= & {} -\kappa _{k}B_{k}(\tau )-\left( -1\right) ^{k}\omega _{1}\alpha _{k}\mu _{k}+\left[ \psi _{k}\left( D_{1}(\tau )\delta _{1,k}+D_{2}(\tau )\delta _{2,k}+C_{k}\right) -1\right] \nonumber \\= & {} -\kappa _{k}D_{k}(\tau )+\psi _{k}\left( D_{1}(\tau )\delta _{1,k}+D_{2}(\tau )\delta _{2,k}+C_{k}\right) -\left[ \left( -1\right) ^{k}\omega _{1}\alpha _{k}\mu _{k}+1\right] \nonumber \\= & {} -\kappa _{k}D_{k}(\tau )+\psi _{k}\left( D_{1}(\tau )\delta _{1,k}+D_{2}(\tau )\delta _{2,k}+C_{k}\right) -\beta _{k}(\omega _{1}). \end{aligned}$$
(55)

The right-hand side of (55) is then denoted \(h_{k}(D_{1},D_{2})\). Due to the convexity of \(\psi _{k}\), there is only one point \(\left( u_{1}^{*},u_{2}^{*}\right) \) such that \(h_{k}\left( u_{1}^{*},u_{2}^{*}\right) =0\) for \(k=1,2\). These equations are indeed equivalent to

$$\begin{aligned} \psi _{k}\left( u_{1}\delta _{1,k}+u_{2}\delta _{2,k}+C_{k}\right) =\beta _{k}(\omega _{1})+\kappa _{k}u_{k}. \end{aligned}$$

We rewrite Eq. (55) as follows:

$$\begin{aligned} \frac{dD_{k}}{-\kappa _{k}D_{k}+\psi _{k}\left( D_{1}\delta _{1,k}+D_{2}\delta _{2,k}+C_{k}\right) -\beta _{k}(\omega _{1})}=d\tau . \end{aligned}$$

As \(D_{k}(0)=\omega _{k+1}\) for \(k\in \{1,2\}\), direct integration yields

$$\begin{aligned} \int _{\omega _{2}}^{D_{1}}\frac{du_{1}}{-\kappa _{1}u_{1}+\psi _{1}\left( u_{1}\delta _{1,1}+D_{2}\delta _{2,1}+C_{1}\right) -\beta _{1}(\omega _{1})}=\tau , \\ \int _{\omega _{3}}^{D_{2}}\frac{du_{2}}{-\kappa _{2}u_{2}+\psi _{2}\left( D_{1}\delta _{1,2}+u_{2}\delta _{2,2}+C_{2}\right) -\beta _{2}(\omega _{1})}=\tau . \end{aligned}$$

with \(D_{k}\in [\omega _{k+1},u_{k}^{*})\) or \(D_{k}\in [u_{k}^{*},\omega _{k+1})\).

We can remark that if \(\left( D_{1},D_{2}\right) =\left( u_{1}^{*},u_{2}^{*}\right) \) then \(\tau =+\infty \), as the denominator of the integrand converges to zero. If we define the functions \(F_{\omega _{1}}^{1}(x,y)\) and \(F_{\omega _{1}}^{2}(x,y)\) from \(\mathbb {R}^{2}\) to \(\mathbb {R}^{+}\) by Eq. (36), \(D_{1}\) and \(D_{2}\) are such that \(F_{\omega _{1}}^{k}(D_{1},D_{2})=\tau \). If \(\left( F_{\omega _{1}}^{1}\right) ^{-1}(\tau \,|\,y)\) and \(\left( F_{\omega _{1}}^{2}\right) ^{-1}(\tau \,|\,x)\) are respectively the inverse functions of \(F_{\omega _{1}}^{1}(.,y)\) and \(F_{\omega _{1}}^{2}(x,.)\), then \(D_{1}\) and \(D_{2}\) satisfy the following system

$$\begin{aligned} D_{1}&=\left( F_{\omega _{1}}^{1}\right) ^{-1}(\tau \,|\,D_{2}),\\ D_{2}&=\left( F_{\omega _{1}}^{2}\right) ^{-1}(\tau \,|\,D_{1}), \end{aligned}$$

or \(B_{1}(t,T)=\left( F_{\omega _{1}}^{1}\right) ^{-1}(T-t\,|\,B_{2}(t,T))\) and \(B_{2}(t,T)=\left( F_{\omega _{1}}^{2}\right) ^{-1}(T-t\,|\,B_{1}(t,T))\) . \(\square \)
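In practice, \(D_{1}(\tau )\) and \(D_{2}(\tau )\) can also be obtained by integrating the coupled system (55) forward in \(\tau \) rather than inverting \(F_{\omega _{1}}^{k}\). A minimal Python sketch, under illustrative assumptions not taken from the paper: exponential jump sizes with rate \(\rho _{k}\) (so \(\psi _{k}(u)=\rho _{k}/(\rho _{k}-u)\) for \(u<\rho _{k}\)), \(C_{k}=0\), \(\beta _{k}(\omega _{1})=(-1)^{k}\omega _{1}\alpha _{k}\mu _{k}+1\) as in (55), and hypothetical values of \(\omega _{1},\omega _{2},\omega _{3},\alpha _{k}\).

```python
import numpy as np

# Hypothetical parameters (illustrative only, not the paper's estimates).
kappa = np.array([2.0, 2.0])
delta = np.array([[0.3, 0.2],
                  [0.2, 0.3]])        # delta[k-1, j-1] = delta_{k,j}
rho   = np.array([2.0, 2.0])          # exponential jump rates; mean jump mu_k = 1/rho_k
mu    = 1.0 / rho
alpha = np.array([0.2, 0.2])
omega1, omega2, omega3 = 0.5, 0.1, 0.1
C     = np.array([0.0, 0.0])          # C_k set to zero for illustration
# beta_k(omega_1) = (-1)^k omega_1 alpha_k mu_k + 1   (0-based index k -> k+1)
beta  = np.array([(-1)**(k + 1) * omega1*alpha[k]*mu[k] + 1.0 for k in range(2)])

def psi(k, u):
    """MGF of the exponential jump distribution, defined for u < rho_k."""
    assert u < rho[k], "psi_k is only defined for u < rho_k"
    return rho[k] / (rho[k] - u)

def rhs(D):
    """Right-hand side h_k of ODE (55) for k = 1, 2."""
    return np.array([
        -kappa[k]*D[k] + psi(k, D[0]*delta[0, k] + D[1]*delta[1, k] + C[k]) - beta[k]
        for k in range(2)
    ])

def solve_D(tau, dt=1e-2):
    """RK4 integration of (55) from D_k(0) = omega_{k+1}."""
    D = np.array([omega2, omega3])
    for _ in range(round(tau / dt)):
        k1 = rhs(D)
        k2 = rhs(D + 0.5*dt*k1)
        k3 = rhs(D + 0.5*dt*k2)
        k4 = rhs(D + dt*k3)
        D = D + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return D

D_inf = solve_D(10.0)   # for large tau, D approaches the fixed point (u1*, u2*)
```

As \(\tau \) grows, the numerical solution settles at the unique root \((u_{1}^{*},u_{2}^{*})\) of \(h_{k}=0\), consistent with the remark that \(\tau =+\infty \) at that point.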

Cite this article

Hainaut, D., Goutte, S. A switching microstructure model for stock prices. Math Finan Econ 13, 459–490 (2019). https://doi.org/10.1007/s11579-018-00234-6
