Multilevel estimation of normalization constants using ensemble Kalman–Bucy filters

Statistics and Computing

Abstract

In this article we consider the application of multilevel Monte Carlo to the estimation of normalizing constants. In particular, we make use of a filtering algorithm, the ensemble Kalman–Bucy filter (EnKBF), which is an N-particle representation of the Kalman–Bucy filter (KBF). The EnKBF is of interest as it coincides with the optimal filter, i.e. the KBF, in the continuous-time linear setting; this motivates our particular setup in the linear setting. The resulting methodology is the multilevel ensemble Kalman–Bucy filter (MLEnKBF). We provide an analysis based on deriving \({\mathbb {L}}_q\)-bounds for the normalizing constants using both the single-level and the multilevel algorithms, which builds largely on the previous work deriving the MLEnKBF (Chada et al. 2022). We illustrate our results through numerical experiments, where we first demonstrate the error-to-cost rates of the MLEnKBFs, comparing them to the EnKBF on a linear Gaussian model. Our analysis is specific to one variant of the MLEnKBF, whereas the numerics are tested on different variants. We also exploit this methodology for parameter estimation, which we test on models arising in the atmospheric sciences, such as the stochastic Lorenz 63 and 96 models.

References

  • Arapostathis, A., Marcus, S.I.: Analysis of an identification algorithm arising in the adaptive estimation of Markov chains. Math. Control Signals Syst. 3, 1–29 (1990)

  • Bain, A., Crisan, D.: Fundamentals of Stochastic Filtering. Springer, New York (2009)

  • Beskos, A., Crisan, D., Jasra, A., Kantas, N., Ruzayqat, H.: Score-based parameter estimation for a class of continuous-time state space models. SIAM J. Sci. Comput. (to appear) (2021)

  • Bhar, R.: Stochastic Filtering with Applications in Finance. World Scientific (2010)

  • Bishop, A.N., Del Moral, P.: On the mathematical theory of ensemble (linear-Gaussian) Kalman–Bucy filtering. arXiv preprint arXiv:2006.08843 (2020)

  • Bishop, A.N., Del Moral, P.: On the stability of Kalman-Bucy diffusion processes. SIAM J. Control. Optim. 55(6), 4015–4047 (2017)

  • Brosse, N., Durmus, A., Moulines, E.: Normalizing constants of log-concave densities. Elect. J. Stat. 12, 851–889 (2018)

  • Cerou, F., Del Moral, P., Guyader, A.: A non-asymptotic variance theorem for unnormalized Feynman–Kac particle models. Ann. Inst. Henri Poincaré 47, 629–649 (2011)

  • Chada, N.K., Franks, J., Jasra, A., Law, K.J.H., Vihola, M.: Unbiased inference for discretely observed hidden Markov model diffusions. SIAM/ASA J. Uncertainty Quantif. 9(2), 763–787 (2021)

  • Chada, N.K., Jasra, A., Yu, F.: Multilevel ensemble Kalman–Bucy filters. SIAM/ASA J. Uncertainty Quantif. (to appear) (2022)

  • Chernov, A., Hoel, H., Law, K.J.H., Nobile, F., Tempone, R.: Multilevel ensemble Kalman filtering for spatio-temporal processes. Numer. Math. 147, 71–125 (2021)

  • Crisan, D., Del Moral, P., Jasra, A., Ruzayqat, H.: Log-normalization constant estimation using the ensemble Kalman–Bucy filter with application to high-dimensional models. Adv. Appl. Probab. (to appear) (2022)

  • Crisan, D., Rozovskii, B.: The Oxford Handbook of Nonlinear Filtering. Oxford University Press, Oxford (2011)

  • Del Moral, P., Jasra, A., Law, K., Zhou, Y.: Multilevel sequential Monte Carlo samplers for normalizing constants. TOMACS 27, 1 (2017)

  • Del Moral, P., Doucet, A., Singh, S.S.: Forward smoothing using sequential Monte Carlo. arXiv preprint arXiv:1012.5390 (2010)

  • Del Moral, P.: Feynman-Kac Formulae. Springer, New York (2004)

  • Del Moral, P., Tugaut, J.: On the stability and the uniform propagation of chaos properties of ensemble Kalman–Bucy filters. Ann. Appl. Probab. 28, 790–850 (2018)

  • Evensen, G.: Data Assimilation: The Ensemble Kalman Filter, 2nd edn. Springer (2009)

  • Fossum, K., Mannseth, T., Stordal, A.S.: Assessment of multilevel ensemble-based data assimilation for reservoir history matching. Comput. Geosci. 24, 217–239 (2020)

  • Giles, M.B.: Multilevel Monte Carlo path simulation. Oper. Res. 56, 607–617 (2008)

  • Giles, M.B.: Multilevel Monte Carlo methods. Acta Numer. 24, 259–328 (2015)

  • Gelman, A., Meng, X.-L.: Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Stat. Sci. 13, 163–185 (1998)

  • Hoel, H., Law, K.J.H., Tempone, R.: Multilevel ensemble Kalman filtering. SIAM J. Numer. Anal. 54(3), 1813–1839 (2016)

  • Hoel, H., Shaimerdenova, G., Tempone, R.: Multi-index ensemble Kalman filtering. arXiv preprint arXiv:2104.07263 (2021)

  • Jasra, A., Kamatani, K., Law, K.J.H., Zhou, Y.: Multilevel particle filters. SIAM J. Numer. Anal. 55(6), 3068–3096 (2017)

  • Jasra, A., Kamatani, K., Osei, P.P., Zhou, Y.: Multilevel particle filters: normalizing constant estimation. Stat. Comput. 28, 47–60 (2018)

  • Jazwinski, A.: Stochastic Processes and Filtering Theory. Academic Press, London (1970)

  • Kostov, S., Whiteley, N.: An algorithm for approximating the second moment of the normalizing constant estimate from a particle filter. Methodol. Comput. Appl. Probab. 19, 799–818 (2017)

  • Le Gland, F., Mevel, M.: Recursive identification in hidden Markov models. Proc. 36th IEEE Conf. Dec. Contr. pp. 3468–3473 (1997)

  • Lorenz, E.N.: Deterministic nonperiodic flow. J. Atmos. Sci. 20(2), 130–141 (1963)

  • Lorenz, E.N.: Predictability: A problem partly solved. Proc. ECMWF Seminar on predictability 1, 1–18 (1996)

  • Majda, A., Wang, X.: Non-linear Dynamics and Statistical Theories for Basic Geophysical Flows. Cambridge University Press (2006)

  • Oliver, D., Reynolds, A.C., Liu, N.: Inverse Theory for Petroleum Reservoir Characterization and History Matching, 1st edn. Cambridge University Press (2008)

  • Reich, S.: Data assimilation: the Schrödinger perspective. Acta Numer. 28, 635–711 (2019)

  • Poyiadjis, G., Doucet, A., Singh, S.S.: Particle approximations of the score and observed information matrix in state space models with application to parameter estimation. Biometrika 98, 65–80 (2011)

  • Sakov, P., Oke, P.R.: A deterministic formulation of the ensemble Kalman filter: an alternative to ensemble square root filters. Tellus A. 60(2), 361–371 (2008)

  • Spall, J.C.: Introduction to Stochastic Search and Optimization, 1st edn. Wiley, New York (2003)

  • Spall, J.C.: Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Trans. Autom. Control 37, 332–341 (1992)

  • Rischard, M., Jacob, P.E., Pillai, N.: Unbiased estimation of log normalizing constants with applications to Bayesian cross-validation. arXiv preprint arXiv:1810.01382 (2018)

  • Vihola, M.: Unbiased estimators and multilevel Monte Carlo. Oper. Res. 66, 448–462 (2018)

Acknowledgements

This work was supported by KAUST baseline funding.

Author information

Corresponding author

Correspondence to Neil K. Chada.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Analysis for EnKBF NC estimator

Appendix A covers the propagation of chaos result, which is required for the variance of the single-level EnKBF NC estimator. Appendix B then discusses the various discretization biases of the diffusion processes related to both the EnKBF and the NC estimator. Finally, our main theorem is proved in Appendix C. All of our results are specific to the vanilla variant of the EnKBF, F(1).

Before proceeding to our results, we introduce the following assumptions, which will hold from here on but will not be repeated in any statements. For a square matrix \(B\), we denote by \(\mu (B)\) the maximum eigenvalue of \(\text {Sym}(B)\).

  1. We have that \(\mu (A)<0\).

  2. There exists a \(\mathsf {C}<+\infty \) such that for any \((k,l)\in {\mathbb {N}}_0^2\) we have that

    $$\begin{aligned} \max _{(j_1,j_2)\in \{1,\dots ,d_x\}^2}|P_{k\Delta _l} (j_1,j_2)|\le \mathsf {C}. \end{aligned}$$
    (A.1)

We note that 1. is typically used in the time stability of the hidden diffusion process \(X_t\); see for instance Del Moral and Tugaut (2018). In the case of 2., we expect that it can be verified under 1., with \(S=\mathsf {C}I\) for a positive constant \(\mathsf {C}\), and some controllability and observability assumptions (e.g. Del Moral and Tugaut 2018, eq. (20)). Under such assumptions, the Riccati equation has a solution and moreover, by Del Moral and Tugaut (2018, Proposition 5.3), \({\mathcal {P}}_t\) is exponentially stable w.r.t. the Frobenius norm, so that this type of bound exists in continuous time.
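As a quick numerical illustration of these assumptions (a minimal sketch with placeholder matrices, not the models of the paper), one can check \(\mu (A)<0\) directly and iterate an Euler discretization of the Riccati equation to see that the entries of the covariance remain bounded, in the spirit of (A.1):

import numpy as np

def mu(B):
    # Maximum eigenvalue of Sym(B) = (B + B^T)/2
    return np.linalg.eigvalsh(0.5 * (B + B.T)).max()

# Illustrative linear-Gaussian model (placeholders, not from the paper)
A = np.array([[-1.0, 0.5], [0.0, -0.8]])  # drift matrix with mu(A) < 0
C = np.eye(2)                             # observation matrix
R = np.eye(2)                             # observation noise covariance
Q = 0.1 * np.eye(2)                       # signal noise covariance

assert mu(A) < 0  # Assumption 1

# Euler discretization of the Riccati equation, step size Delta_l
Delta_l = 2.0 ** (-6)
P = np.zeros((2, 2))
for k in range(int(10.0 / Delta_l)):
    P = P + Delta_l * (A @ P + P @ A.T + Q - P @ C.T @ np.linalg.solve(R, C @ P))

print("mu(A) =", mu(A), ", max |P(j1,j2)| =", np.abs(P).max())  # bounded, cf. (A.1)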

Throughout the appendix we will make use of the \(C_q\)-inequality. For two real-valued random variables \(X\) and \(Y\) defined on the same probability space, with expectation operator \({\mathbb {E}}\), suppose that for some fixed \(q\in (0,\infty )\) both \({\mathbb {E}}[|X|^q]\) and \({\mathbb {E}}[|Y|^q]\) are finite; then the \(C_q\)-inequality states that

$$\begin{aligned} {\mathbb {E}}[|X+Y|^q] \le \mathsf {C}_q\Big ({\mathbb {E}}[|X|^q] +{\mathbb {E}}[|Y|^q]\Big ), \end{aligned}$$

where \(\mathsf {C}_q=1\), if \(q\in (0,1)\) and \(\mathsf {C}_q=2^{q-1}\) for \(q\in [1,\infty )\).
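A minimal Monte Carlo sanity check of this inequality, with Gaussian X and Y chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=10**6)
Y = 2.0 * rng.normal(size=10**6)

for q in (0.5, 1.0, 2.0, 3.0):
    C_q = 1.0 if q < 1 else 2.0 ** (q - 1)
    lhs = np.mean(np.abs(X + Y) ** q)        # estimate of E[|X+Y|^q]
    rhs = C_q * (np.mean(np.abs(X) ** q) + np.mean(np.abs(Y) ** q))
    print(f"q={q}: {lhs:.3f} <= {rhs:.3f}")  # lhs <= rhs in every case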

In order to verify some of our claims in the analysis, we will rely on various results derived in Chada et al. (2022). For convenience we state these below; they are concerned with various \({\mathbb {L}}_q\)-bounds.

Lemma A.1

For any \(q\in [1,\infty )\) there exists a \(\mathsf {C}<+\infty \) such that for any \((k,l,j)\in {\mathbb {N}}_0^2\times \{1,\dots ,d_y\}\):

$$\begin{aligned} {\mathbb {E}}[|[Y_{(k+1)\Delta _l}-Y_{k\Delta _l}](j)|^q]^{1/q} \le \mathsf {C}\Delta _l^{1/2}. \end{aligned}$$

Lemma A.2

For any \((q,t,k,l)\in [1,\infty )\times {\mathbb {N}}_0^3\) there exists a \(\mathsf {C}<+\infty \) such that for any \((j,N)\in \{1,\dots ,d_x\}\times \{2,3,\dots \}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big |m_{t+k_1\Delta _l}^N(j) -m_{t+k_1\Delta _l}(j)\Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}}{\sqrt{N}}. \end{aligned}$$

Lemma A.3

For any \((q,k,l)\in (0,\infty )\times {\mathbb {N}}_0^2\) there exists a \(\mathsf {C}<+\infty \) such that for any \(N \ge 2\), \(i\in \{1,\dots ,N\}\) and \(j\in \{1,\dots ,d_x\}\):

$$\begin{aligned} {\mathbb {E}}[|\xi _{k\Delta _l}^i(j)|^q]^{1/q} \le \mathsf {C}, \end{aligned}$$

where \(\xi _{k\Delta _l}\) is defined through (2.11).

We now present our first result for the single-level EnKBF NC estimator, stated as an \({\mathbb {L}}_q\)-error bound.

Proposition A.1

For any \((q,t,k_1,l)\in [1,\infty ) \times {\mathbb {N}}_0^3\) there exists a \(\mathsf {C}<+\infty \) such that for any \(N \in \{2,3,\ldots \}\) we have:

$$\begin{aligned} {\mathbb {E}}\Big [\Big |[\overline{U}_{t+k_1\Delta _l}^{N,l}(Y) -\overline{U}_{t+k_1\Delta _l}^{l}(Y)]\Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}}{\sqrt{N}}. \end{aligned}$$

Proof

Let us first consider \(\overline{U}_{t+k_1\Delta _l}^{N,l}(Y) -\overline{U}_{t+k_1\Delta _l}^{l}(Y)\), which, for every \(l \in {\mathbb {N}}_0\), we can decompose through a martingale-remainder-type decomposition,

$$\begin{aligned} \overline{U}_{t+k_1\Delta _l}^{N,l}(Y) -\overline{U}_{t+k_1\Delta _l}^{l}(Y) =M_{t+k_1\Delta _l}^l(Y) + R_{t+k_1\Delta _l}^l, \end{aligned}$$
(A.2)

such that

$$\begin{aligned} M_{t+k_1\Delta _l}^l(Y)= & {} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \langle Cm_{k\Delta _l}^N, {R}^{-1} [Y_{(k+1)\Delta _l} -Y_{k\Delta _l}]\rangle \\&-\sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \langle Cm_{k\Delta _l}, {R}^{-1} [Y_{(k+1)\Delta _l} -Y_{k\Delta _l}]\rangle , \\ R_{t+k_1\Delta _l}^l= & {} -\frac{\Delta _l}{2} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \langle m_{k\Delta _l}^N, Sm_{k\Delta _l}^N\rangle \\&+\frac{\Delta _l}{2} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \langle m_{k\Delta _l}, Sm_{k\Delta _l}\rangle . \end{aligned}$$

We can decompose the martingale term from (A.2) further through

$$\begin{aligned} M_{t+k_1\Delta _l}^l(Y) = M_{t+k_1\Delta _l}^l(1) + R_{t+k_1\Delta _l}^l(1), \end{aligned}$$

where

$$\begin{aligned} M_{t+k_1\Delta _l}^l(1)&= \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} C(j_1,j_2)\nonumber \\&\quad \{m_{k\Delta _l}^N(j_2)-m_{k\Delta _l}(j_2)\} {R}^{-1}(j_1,j_3)\nonumber \\&\quad \times \{[Y_{(k+1)\Delta _l}-Y_{k\Delta _l}](j_3) -CX_{k\Delta _l}(j_3)\Delta _l\}, \end{aligned}$$
(A.3)
$$\begin{aligned} R_{t+k_1\Delta _l}^l(1)&= \Delta _l\sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y}C(j_1,j_2)\nonumber \\&\quad \{m_{k\Delta _l}^N(j_2)-m_{k\Delta _l}(j_2)\} \nonumber \\&\quad \times {R}^{-1}(j_1,j_3)CX_{k\Delta _l}(j_3). \end{aligned}$$
(A.4)

In order to proceed we construct a martingale associated with the term \(M_t^l\). Let us begin with the \(M_t^l(1)\) term (A.3), for which we work on the filtered probability space \((\Omega ,{\mathscr {F}}, {\mathscr {F}}_{k \Delta _l}, {\mathbb {P}})\), so that \((M^l_t(1), {\mathscr {F}}_{k \Delta _l})\) is a discrete-time martingale.

Then by using Hölder’s inequality

$$\begin{aligned}&{\mathbb {E}}[|M^l_{t+k_1\Delta _l}(1)|^q]^{1/q}\nonumber \\&\quad = {\mathbb {E}} \Big [\Big |\sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} C(j_1,j_2)\nonumber \\&\qquad \qquad \quad \{m_{k\Delta _l}^N(j_2)-m_{k\Delta _l}(j_2)\} {R}^{-1}(j_1,j_3) \nonumber \\&\qquad \times \{[Y_{(k+1)\Delta _l}-Y_{k\Delta _l}](j_3) -CX_{k\Delta _l}(j_3)\Delta _l\}\Big |^q\Big ]^{1/q}\nonumber \\&\quad \le {\mathbb {E}} \Big [\Big |\sum _{k=0}^{t\Delta _l^{-1} +k_1-1} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} C(j_1,j_2)\nonumber \\&\qquad \qquad \quad \{m_{k\Delta _l}^N(j_2)-m_{k\Delta _l}(j_2)\} {R}^{-1}(j_1,j_3)\Big |^{2q}\Big ]^{1/2q} \nonumber \\&\qquad \times {\mathbb {E}} \Big [\Big | \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_3=1}^{d_y} \{[Y_{(k+1)\Delta _l}-Y_{k\Delta _l}](j_3)\nonumber \\&\qquad \qquad \quad -CX_{k\Delta _l}(j_3)\Delta _l\} \Big |^{2q} \Big ]^{1/2q} \nonumber \\&\quad =: T_1 \times T_2. \end{aligned}$$
(A.5)

For \(T_1\) we can apply the Minkowski inequality and Lemma A.2 to yield

$$\begin{aligned} T_1\le & {} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_y} \sum _{j_2=1}^{d_x} \sum _{j_3=1}^{d_y} C(j_1,j_2) {R}^{-1}(j_1,j_3)\\&{\mathbb {E}} \Big [\Big |\{m_{k\Delta _l}^N(j_2) -m_{k\Delta _l}(j_2)\} \Big |^{2q}\Big ]^{1/2q} \\\le & {} \frac{\mathsf {C}}{\sqrt{N}}. \end{aligned}$$

For \(T_2\) we know that the expression \(\{[Y_{(k+1)\Delta _l}-Y_{k\Delta _l}](j_3) -CX_{k\Delta _l}(j_3)\Delta _l\}\) is a Brownian motion increment, using the formulae (2.1)–(2.2). Therefore, by using the Burkholder–Davis–Gundy inequality along with Minkowski's inequality, with \(\tilde{q} = 2q\), we have

$$\begin{aligned} T_2\le & {} \sum _{j_3=1}^{d_y} {\mathbb {E}}\Big [\Big | \sum ^{t\Delta ^{-1}_l+k_1-1}_{k=0} [V_{(k+1)\Delta _l} -V_{k\Delta _l}](j_3)\Big |^{\tilde{q}}\Big ]^{1/\tilde{q}} \\\le & {} \sum _{j_3=1}^{d_y} \sum ^{t\Delta ^{-1}_l+k_1-1}_{k=0} \mathsf {C}_{\tilde{q}} {\mathbb {E}}\Big [ \Big |[V_{(k+1)\Delta _l} -V_{k\Delta _l}]^2(j_3)\Big |^{\tilde{q}/2}\Big ]^{1/\tilde{q}}\\\le & {} \sum _{j_3=1}^{d_y} \sum ^{t\Delta ^{-1}_l+k_1-1}_{k=0} \mathsf {C}_{\tilde{q}} \Big ( {\mathbb {E}}\Big [\Big |[V_{(k+1) \Delta _l} -V_{k\Delta _l}](j_3)\Big |^{\tilde{q}} \Big ]^{2/\tilde{q}}\Big )^{1/2} \end{aligned}$$

Then, using the fact that \({\mathbb {E}}[|[V_{(k+1)\Delta _l}-V_{k\Delta _l}]|^{\tilde{q}}] =O(\Delta _l^{\tilde{q}/2})\) and that, with the summation, the whole term is of order \({\mathcal {O}}(\Delta _l^{1/2-1/\tilde{q}})\), we can conclude that \(T_2\) is of order \({\mathcal {O}}(1)\), since \(q \ge 1\).

The \(R_t^l(1)\) term is handled similarly to \(M_t^l(1)\): we again make use of the Minkowski and Hölder inequalities, together with Lemma A.2,

$$\begin{aligned}&{\mathbb {E}}[|R^l_{t+k_1\Delta _l}(1)|^q]^{1/q}\nonumber \\&\quad = {\mathbb {E}} \Big [ \Big | \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y}C(j_1,j_2)\nonumber \\&\qquad \qquad \quad \{m_{k\Delta _l}^N(j_2)-m_{k\Delta _l}(j_2)\}{R}^{-1}(j_1,j_3) CX_{k\Delta _l}(j_3) \Delta _l \Big |^q \Big ]^{1/q} \nonumber \\&\quad \le \sum _{k=0}^{t\Delta _l^{-1}+k_1-1}\sum _{j_1=1}^{d_y} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y}\sum ^{d_x}_{j_4=1} C(j_1,j_2){R}^{-1}(j_1,j_3) \nonumber \\&\qquad \qquad \quad \Big ( {\mathbb {E}} \Big [\Big |m^N_{k \Delta _l}(j_2) - m_{k \Delta _l}(j_2) \Big |^{2q}\Big ]^{1/2q} \nonumber \\&\qquad \times {\mathbb {E}} \Big [\Big |X_{k\Delta _l}(j_4) \Delta _l\Big |^{2q}\Big ]^{1/2q}\Big ) \nonumber \\&\quad \le \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum ^{d_x}_{j_4=1} \frac{\mathsf {C}}{\sqrt{N}} {\mathbb {E}} \Big [\Big |X_{k\Delta _l}(j_4)\Delta _l\Big |^{2q}\Big ]^{1/2q}. \end{aligned}$$
(A.6)

For the final term of (A.6) we can show it is of order \({\mathcal {O}}(\Delta _l)\), using the Cauchy–Schwarz and Jensen inequalities:

$$\begin{aligned}&{\mathbb {E}} \Big [\Big |X_{k\Delta _l}(j_4) \Delta _l\Big |^{2q}\Big ]^{1/2q}\\&\quad = \Delta _l {\mathbb {E}}\Big [\Big | \int ^{(k+1) \Delta _l}_{k\Delta _l}{\Delta _l^{-1}}X_s(j_4) ds\Big |^{2q}\Big ]^{1/2q} \\&\quad \le \Delta _l {\mathbb {E}} \Big [ \Big ( \int ^{(k+1) \Delta _l}_{k\Delta _l}\Delta ^{-2}_l ds \Big )^q \Big (\int ^{(k+1)\Delta _l}_{k\Delta _l} |X_s(j_4)|^2d s\Big )^q \Big ]^{1/2q} \\&\quad = \Delta ^{1/2}_l {\mathbb {E}} \Big [\Big (\int ^{(k+1) \Delta _l}_{k\Delta _l} |X_s(j_4)|^2ds\Big )^q \Big ]^{1/2q}\\&\quad \le \Delta _l^{1/2} {\mathbb {E}}\Big [{\Delta _l^{q-1} \int ^{(k+1)\Delta _l}_{k\Delta _l}}\Big | X_s(j_4) \Big |^{2q} ds\Big ]^{1/2q} \\&\quad \le \Delta _l^{1-1/2q}\Big (\int ^{(k+1)\Delta _l}_{k\Delta _l} {\mathbb {E}}[|X_s(j_4)|^{2q}]\,ds\Big )^{1/2q} \\&\quad \le \mathsf {C} \Delta _l^{1-1/2q} (\Delta _l)^{1/2q} \quad (\text {by Del Moral and Tugaut 2018, Eq.~(54)}) \\&\quad = \mathsf {C} \Delta _l, \end{aligned}$$

therefore, combining this with the summation in (A.6), the above quantity is of order \({\mathcal {O}}(1)\), resulting in \({\mathbb {E}}[|R^l_{t+k_1\Delta _l}(1)|^q]^{1/q} \le \frac{\mathsf {C}}{\sqrt{N}}\). All that is left is the \(R^l_{t+k_1\Delta _l}\) term, which we can rewrite as

$$\begin{aligned} R_{t+k_1\Delta _l}^l= & {} -\frac{\Delta _l}{2} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \langle m_{k\Delta _l}^N, Sm_{k\Delta _l}^N\rangle \\&+ \frac{\Delta _l}{2} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \langle m_{k\Delta _l}, Sm_{k\Delta _l}\rangle \\= & {} -\frac{\Delta _l}{2}\sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x}\\&\{m_{k\Delta _l}^N (j_1)S(j_1,j_2)m_{k\Delta _l}^N(j_2)\\&\quad -m_{k\Delta _l}(j_1) S(j_1,j_2)m_{k\Delta _l}(j_2)\} \\= & {} -\frac{\Delta _l}{2}\sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \\&\{m_{k\Delta _l}^N(j_1) -m_{k\Delta _l}(j_1)\}S(j_1,j_2) m_{k\Delta _l}^N(j_2) \\&\quad + m_{k\Delta _l}(j_1)S(j_1,j_2)\{m_{k\Delta _l}^N(j_2) -m_{k\Delta _l}(j_2)\}. \end{aligned}$$

By taking its associated \({\mathbb {L}}_q\)-bound, from Minkowski's inequality and Lemmas A.2–A.3, we have

$$\begin{aligned}&{\mathbb {E}}[|R_{t+k_1\Delta _l}^l|^q]^{1/q} \\&\quad = {\mathbb {E}} \Big [\Big | -\frac{\Delta _l}{2} \sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \\&\qquad \{m_{k\Delta _l}^N(j_1) -m_{k\Delta _l}(j_1)\} S(j_1,j_2)m_{k\Delta _l}^N(j_2) \\&\qquad + m_{k\Delta _l}(j_1)S(j_1,j_2)\{m_{k\Delta _l}^N(j_2) -m_{k\Delta _l}(j_2)\} \Big |^q\Big ]^{1/q} \\&\quad \le \frac{\Delta _l}{2}\sum _{k=0}^{t\Delta _l^{-1}+k_1-1} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} S(j_1,j_2)\\&\qquad \Big ({\mathbb {E}}\Big [\Big |\{m_{k\Delta _l}^N(j_1) -m_{k\Delta _l}(j_1)\}m_{k\Delta _l}^N(j_2)\Big |^{q}\Big ]^{1/q} \\&\qquad + {\mathbb {E}}\Big [\Big |m_{k\Delta _l}(j_1)\{m_{k\Delta _l}^N (j_2)-m_{k\Delta _l}(j_2)\}\Big |^{q}\Big ]^{1/q} \Big ) \\&\quad \le \frac{\mathsf {C}}{\sqrt{N}}. \end{aligned}$$

Finally by using the Minkowski inequality, we can deduce that

$$\begin{aligned}&{\mathbb {E}}\Big [\Big |[\overline{U}_{t+k_1\Delta _l}^{N,l}(Y) -\overline{U}_{t+k_1\Delta _l}^{l}(Y)]\Big |^q\Big ]^{1/q}\\&\quad = {\mathbb {E}}\Big [\Big | M_{t+k_1\Delta _l}^l(1) +R_{t+k_1\Delta _l}^l(1) + R_{t+k_1\Delta _l}^l \Big |^q \Big ]^{1/q} \\&\quad \le {\mathbb {E}}[|M^l_{t+k_1\Delta _l}(1)|^q]^{1/q}\\&\qquad +{\mathbb {E}}[|R^l_{t+k_1\Delta _l}(1)|^q]^{1/q} +{\mathbb {E}}[|R^l_{t+k_1\Delta _l}|^q]^{1/q} \\&\quad \le \frac{\mathsf {C}}{\sqrt{N}}. \end{aligned}$$

\(\square \)

B Analysis for discretized diffusion process

In this appendix we derive results for the discretized diffusion process. This will include both the discretized i.i.d. particle system

$$\begin{aligned} \zeta _{(k+1)\Delta _l}^i&=(I+A\Delta _l)\zeta _{k\Delta _l}^i + {Q}^{1/2} [\overline{W}_{(k+1)\Delta _l}^i -\overline{W}_{k\Delta _l}^i] \nonumber \\&\quad + P_{k \Delta _l} C^{\top }{R}^{-1}\Big ([Y_{(k+1)\Delta _l}-Y_{k\Delta _l}] \nonumber \\&\qquad -\Big [C\zeta _{k\Delta _l}^i\Delta _l +{R}^{1/2}[\overline{V}_{(k+1)\Delta _l}^i -\overline{V}_{k\Delta _l}^i]\Big ]\Big ), \end{aligned}$$
(B.1)

and the discretized NC estimator. We recall that, in the limit as \(N \rightarrow \infty \), the i.i.d. system coincides with the discretized Kalman–Bucy diffusion whose mean, for \((k,l)\in {\mathbb {N}}_0\times {\mathbb {N}}_0\), is defined by

$$\begin{aligned} m_{(k+1)\Delta _l}^l&= m_{k\Delta _l}^l + Am^l_{k\Delta _l} \Delta _l + U_{k\Delta _l}^l\nonumber \\&\quad \Big ([Y_{(k+1)\Delta _l}-Y_{k\Delta _l}] -Cm_{k\Delta _l}^l\Delta _l\Big ). \end{aligned}$$
(B.2)

We note that in this appendix our results use the notation \(\overline{X}\) for the Kalman–Bucy diffusion, to keep it consistent with Chada et al. (2022). However, these results also hold for the i.i.d. system (B.1). We require additional lemmas from Chada et al. (2022), which are discretization bias results for the discretized Kalman–Bucy diffusion. We state these as follows; the notation of the equations is modified for the multilevel setting, which we will discuss later.
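As a concrete reading of the update (B.1), the following sketch advances all N particles of the i.i.d. system by one Euler step; the helper name and the way the inputs are supplied are ours, and P stands for the covariance \(P_{k\Delta _l}\) appearing in (B.1):

import numpy as np

def iid_particle_step(zeta, P, A, C, Q, R, dY, dl, rng):
    # One Euler step of (B.1): zeta is (N, d_x), dY = Y_{(k+1)dl} - Y_{k dl}
    N, d_x = zeta.shape
    d_y = dY.shape[0]
    dW = rng.normal(scale=np.sqrt(dl), size=(N, d_x))  # signal Brownian increments
    dV = rng.normal(scale=np.sqrt(dl), size=(N, d_y))  # observation Brownian increments
    sqQ = np.linalg.cholesky(Q)                        # Q^{1/2}
    sqR = np.linalg.cholesky(R)                        # R^{1/2}
    gain = P @ C.T @ np.linalg.inv(R)                  # P C^T R^{-1}
    innov = dY[None, :] - (zeta @ C.T) * dl - dV @ sqR.T
    return zeta + (zeta @ A.T) * dl + dW @ sqQ.T + innov @ gain.T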

Lemma B.1

For any \(T\in {\mathbb {N}}\) fixed and \(t\in [0,T]\) there exists a \(\mathsf {C}<+\infty \) such that for any \((l,j_1,j_2)\in {\mathbb {N}}_0\times \{1,\dots ,d_x\}^2\):

$$\begin{aligned} \Big |{\mathcal {P}}_{t}(j_1,j_2)-P_{\tau _t^l}^l(j_1,j_2)\Big | \le \mathsf {C}\Delta _l. \end{aligned}$$

Lemma B.2

For any \(T\in {\mathbb {N}}\) fixed and \(t\in [0,T-1]\) there exists a \(\mathsf {C}<+\infty \) such that for any \((l,j,k_1)\in {\mathbb {N}}_0\times \{1,\dots ,d_x\} \times \{0,1,\dots ,\Delta _{l}^{-1}\}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big (\overline{X}_{t+k_1\Delta _{l}}(j) -\overline{X}_{t+k_1\Delta _{l}}^l(j)\Big )^2\Big ] \le \mathsf {C}\Delta _l^2. \end{aligned}$$

We now present our first conditional bias result, namely the weak error of the Kalman–Bucy diffusion. This weak error is analogous to the strong error of Lemma B.2, but was not proved, or provided, in Chada et al. (2022). However, this result will be required for the current and succeeding appendices.

Lemma B.3

For any \(T\in {\mathbb {N}}\) fixed and \(t\in [0,T-1]\) there exists a \(\mathsf {C}<+\infty \) such that for any \((l,j,k_1)\in {\mathbb {N}}_0\times \{1,\dots ,d_x\}\times \{0,1,\dots ,\Delta _{l}^{-1}\}\):

$$\begin{aligned} \Big |{\mathbb {E}}\Big [\overline{X}_{t+k_1\Delta _{l}}(j) -\overline{X}_{t+k_1\Delta _{l}}^l(j)\Big ]\Big | \le \mathsf {C}\Delta _l. \end{aligned}$$

Proof

As before, we can separate the above expression into different terms,

$$\begin{aligned} {\mathbb {E}}\Big [\overline{X}_{t+k_1\Delta _{l}}(j) -\overline{X}_{t+k_1\Delta _{l}}^l(j)\Big ] = T_1+T_2+T_3, \end{aligned}$$

such that, for \(\tau _t^l=\lfloor \tfrac{t}{\Delta _l}\rfloor \Delta _l\), \(t\in {\mathbb {R}}^+\), we have

$$\begin{aligned} T_1= & {} {\mathbb {E}}\Big [\int _{0}^{t+k_1\Delta _{l}} \Big (\sum _{j_1=1}^{d_x} A(j,j_1)[\overline{X}_s(j_1) -\overline{X}_{\tau _s^l}^l(j_1)] \\&+ \sum _{j_1=1}^{d_x} \sum _{j_2=1}^{d_x} {\mathcal {P}}_s(j,j_1)\hat{C} (j_1,j_2)[\overline{X}_s(j_2) -\overline{X}_{\tau _s^l}^l(j_2)] \Big )ds\Big ], \\ T_2= & {} {\mathbb {E}}\Big [\int _{0}^{t+k_1\Delta _{l}} \Big (\sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x}[{\mathcal {P}}_s (j,j_1)\\&-P_{\tau _s^l}^l(j,j_1)]\hat{C}(j_1,j_2) \overline{X}_{\tau _s^l}^l(j_2)\Big )ds\Big ],\\ T_3= & {} {\mathbb {E}}\Big [\sum ^{d_x}_{j_1=1}\sum ^{d_y}_{j_2=1} \int _{0}^{t+k_1\Delta _{l}}[{\mathcal {P}}_{s}(j,j_1)\\&-P_{\tau _s^l}^l(j,j_1)]\tilde{C}(j_1,j_2)dY_s(j_2)\Big ], \end{aligned}$$

where \(\hat{C}=\tilde{C}C\) with \(\tilde{C}=C^{\top }{R}^{-1}\). Now let us consider each individual term. Firstly, for \(T_1\), using (A.1) we have the following bound

$$\begin{aligned} |T_1| \le \mathsf {C} \int _{0}^{t+k_1 \Delta _{l}}\max _{j\in \{1,\dots ,d_x\}} \Big |{\mathbb {E}}\Big [\overline{X}_{s}(j) -\overline{X}_{\tau _s^l}^l(j)\Big ]\Big |ds. \end{aligned}$$
(B.3)

For \(T_2\), we can apply Lemma B.1, together with the fact that \(\max _{j_2\in \{1,\dots ,d_x\}}|{\mathbb {E}}[\overline{X}_{\tau _s^l}^l (j_2)]|\le \mathsf {C}\), to obtain

$$\begin{aligned} |T_2| \le \mathsf {C}\Delta _l. \end{aligned}$$
(B.4)

Similarly, for \(T_3\), we can use Lemma B.1 and Lemma A.1, which provide the bound

$$\begin{aligned} |T_3| \le \mathsf {C}\Delta _l. \end{aligned}$$
(B.5)

Thus combining (B.3)-(B.5) leads to

$$\begin{aligned}&\max _{j\in \{1,\dots ,d_x\}}\Big |{\mathbb {E}} \Big [\overline{X}_{t+k_1\Delta _{l}}(j) -\overline{X}_{t+k_1\Delta _{l}}^l(j)\Big ]\Big | \le \mathsf {C}\Delta _l\\&\quad +\,\mathsf {C} \int _{0}^{t+k_1\Delta _{l}}\max _{j\in \{1,\dots ,d_x\}} \Big |{\mathbb {E}}\Big [\overline{X}_{s}(j) -\overline{X}_{\tau _s^l}^l(j)\Big ]\Big |ds. \end{aligned}$$

Finally, applying Grönwall's lemma leads to the desired result. \(\square \)

We now proceed with our result for the discretized NC estimator, namely its strong error, through the following lemma.

Lemma B.4

For any \(T\in {\mathbb {N}}\) fixed and \(t\in [0,T-1]\) there exists a \(\mathsf {C}<+\infty \) such that for any \((l,k_1)\in {\mathbb {N}}_0\times \{0,1,\dots ,\Delta _{l}^{-1}\}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big ([\overline{U}_{t+k_1\Delta _l}(Y) -\overline{U}_{t+k_1\Delta _l}^{l}(Y)]\Big )^2\Big ] \le \mathsf {C} \Delta ^2_l. \end{aligned}$$

Proof

Let us first recall that,

$$\begin{aligned} \overline{U}^l_{t+k_1\Delta _{l}}(Y)= & {} \sum ^{t\Delta ^{-1}_l +k_1-1}_{k=0}\langle Cm_{k\Delta _l},{R}^{-1}[Y_{(k+1)\Delta _l} - Y_{k\Delta _l}]\rangle \\&-\frac{\Delta _l}{2}\sum ^{t\Delta ^{-1}_l +k_1-1}_{k=0} \langle m_{k\Delta _l},S m_{k\Delta _l} \rangle , \\ \overline{U}_{t+k_1\Delta _{l}}(Y)= & {} \int ^{t+k_1\Delta _l}_0 \Big [\langle Cm_s,{R}^{-1}dY_s\rangle - \frac{1}{2} \langle m_s,Sm_s\rangle ds\Big ]. \end{aligned}$$

In order to proceed we again consider a martingale-remainder-type decomposition. Therefore, by setting \(\tau _t^l=\lfloor \tfrac{t}{\Delta _l}\rfloor \Delta _l\), \(t\in {\mathbb {R}}^+\), and expanding the angle brackets, we have

$$\begin{aligned} M_{t+k_1\Delta _l}(1)= & {} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x} \sum _{j_3=1}^{d_y} \int ^{t+k_1\Delta _l}_0 {R}^{-1}(j_1,j_3) C(j_1,j_2)\\&\Big [ m_s(j_2) - m^l_{\tau _s^l}(j_2) \Big ]{R}^{1/2} (j_1,j_3)dV_s(j_3), \\ R_{t+k_1\Delta _l}= & {} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \int ^{t+k_1\Delta _l}_{0} \frac{S(j_1,j_2)}{2}\\&\Big [m_s(j_1)m_s(j_2) - m^l_{\tau _s^l}(j_1) m^l_{\tau _s^l}(j_2) \Big ]ds,\\ R_{t+k_1\Delta _l}(1)= & {} \sum _{j_1=1}^{d_y} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \int ^{t+k_1\Delta _l}_0 {R}^{-1}(j_1,j_3) C(j_1,j_2)\\&\Big [ m_s(j_2) - m^l_{\tau _s^l}(j_2) \Big ]CX_s(j_3)ds, \end{aligned}$$

where we have used the formula for the observation process (2.1), combined both remainder terms into one, and taken the scaled Brownian motion \(V_t\). Let us first consider the remainder term \(R_{t+k_1\Delta _l}(1)\). Through Jensen's inequality we have

$$\begin{aligned}&{\mathbb {E}}[R_{t+k_1\Delta _l}(1)]^2 \\&\quad = {\mathbb {E}} \Big [\sum _{j_1=1}^{d_y} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \int ^{t+k_1\Delta _l}_0 {R}^{-1}(j_1,j_3) C(j_1,j_2)\\&\qquad \Big [m_s(j_2) - m^l_{\tau _s^l}(j_2) \Big ]C X_s(j_3)ds\Big ]^2 \\&\quad \le \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x} \sum _{j_3=1}^{d_y}\sum _{j_4=1}^{d_x}{R}^{-2} (j_1,j_3)C^2(j_1,j_2) \\&\quad \int ^{t+k_1\Delta _l}_0{\mathbb {E}}\Big [ [m_s(j_2) -m^l_{\tau _s}(j_2)]X_s(j_4)\Big ]^2 ds. \end{aligned}$$

Then by using \(d^2_xd^2_y\) applications of the \(C_2\)-inequality we get

$$\begin{aligned}&{\mathbb {E}}[R_{t+k_1\Delta _l}(1)]^2 \\&\quad \le \mathsf {C} \Big (\int ^{t+k_1\Delta _l}_0 \max _{j_2 \in \{1,\ldots ,d_x\}} {\mathbb {E}}[m_s(j_2) - m^l_{\tau ^l_s}(j_2)]^2 \max _{j_4 \in \{1,\ldots ,d_x\}}\\&\qquad {\mathbb {E}}[X_s(j_4)]^2 ds \Big ). \end{aligned}$$

We know that \(\max _{j_4 \in \{1,\ldots ,d_x\}}{\mathbb {E}}[X_s(j_4)]^2 \le \mathsf {C}\), so all that remains is to bound \({\mathbb {E}}[m_s(j_2) - m^l_{\tau ^l_s}(j_2)]^2\). By using the fact that \({\mathbb {E}}[\overline{X}_s(j)] = m_s(j)\) and \( {\mathbb {E}}[\overline{X}^l_{\tau ^l_s}(j)] = m^l_{\tau ^l_s}(j) \), we can use the weak error, i.e. Lemma B.3, to conclude that \({\mathbb {E}}[R_{t+k_1\Delta _l}(1)]^2 \le \mathsf {C}\Delta ^2_l\).

Now, to proceed with \(R_{t+k_1\Delta _l}\), we can decompose the difference of the mean terms:

$$\begin{aligned}&m_s(j_1)m_s(j_2) - m^l_{\tau ^l_s}(j_1)m^l_{\tau ^l_s}(j_2)\nonumber \\&\quad = m_s(j_1)m_s(j_2) - m^l_{\tau ^l_s}(j_1)m_s(j_2)\nonumber \\&\qquad + m^l_{\tau ^l_s}(j_1)m_s(j_2) - m^l_{\tau ^l_s} (j_1)m^l_{\tau ^l_s}(j_2) \nonumber \\&\quad = \{m_s(j_1) - m^l_{\tau ^l_s}(j_1)\}m_s(j_2) + m^l_{\tau ^l_s}(j_1)\{m_s(j_2) - m^l_{\tau ^l_s}(j_2)\}. \end{aligned}$$
(B.6)

Therefore one can substitute (B.6) into \(R_{t+k_1\Delta _l}\), which, by Jensen's and the \(C_2\)-inequality, results in

$$\begin{aligned}&{\mathbb {E}}[R_{t+k_1\Delta _l}]^2 \nonumber \\&\quad = {\mathbb {E}} \Big [\sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \int ^{t+k_1\Delta _l}_{0} \frac{S(j_1,j_2)}{2}\nonumber \\&\qquad \Big [ \{m_s(j_1) - m^l_{\tau ^l_s}(j_1)\}m_s(j_2) + m^l_{\tau ^l_s}(j_1)\{m_s(j_2) -m^l_{\tau ^l_s}(j_2)\}\Big ]ds\Big ]^2 \nonumber \\&\quad \le \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \frac{S^2(j_1,j_2)}{2} \int ^{t+k_1\Delta _l}_0 {\mathbb {E}}\Big [\{m_s(j_1) - m^l_{\tau ^l_s} (j_1)\}m_s(j_2)\Big ]^2ds \nonumber \\&\qquad + \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \frac{S^2(j_1,j_2)}{2} \nonumber \\&\qquad \int ^{t+k_1\Delta _l}_0{\mathbb {E}}\Big [m^l_{\tau ^l_s}(j_1)\{m_s(j_2) - m^l_{\tau ^l_s}(j_2)\}\Big ]^2ds. \end{aligned}$$
(B.7)

Therefore, by again applying the \(C_2\)-inequality \(d^2_x\) times, and noting that \(\max _{j_2\in \{1,\dots ,d_x\}}m_s(j_2) =\max _{j_2\in \{1,\dots ,d_x\}}{\mathbb {E}}[\overline{X}_s(j_2)] \le \mathsf {C}\), from Lemma A.2, and similarly for \(m^l_{\tau ^l_s}(j_1)\), we are led to

$$\begin{aligned} {\mathbb {E}}[R_{t+k_1\Delta _l}]^2&\le \mathsf {C} \Big (\int ^{t+k_1\Delta _l}_0 \max _{j_1 \in \{1,\ldots ,d_x\}} {\mathbb {E}}\Big [m_s(j_1) - m^l_{\tau ^l_s}(j_1)\Big ]^2 \\&\qquad + \max _{j_2 \in \{1,\ldots ,d_x\}} {\mathbb {E}} \Big [m_s(j_2) - m^l_{\tau ^l_s}(j_2)\Big ]^2 ds \Big ). \end{aligned}$$

Through the same substitution as before, and using the weak error, i.e. Lemma B.3, we have that \({\mathbb {E}}[R_{t+k_1\Delta _l}]^2 \le \mathsf {C}\Delta ^2_l\).

Lastly we have the martingale term \(M_{t+k_1\Delta _l}(1)\). As before we can apply Jensen's inequality

$$\begin{aligned} {\mathbb {E}}[M_{t+k_1\Delta _l}(1)]^2= & {} {\mathbb {E}} \Big [\sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \int ^{t+k_1\Delta _l}_0 {R}^{-1}(j_1,j_3) C(j_1,j_2)\\&\Big [ m_s(j_2) - m^l_{\tau _s^l}(j_2) \Big ]{R}^{1/2} (j_1,j_3)dV_s(j_3)\Big ]^2 \\\le & {} \sum _{j_1=1}^{d_y}\sum _{j_2=1}^{d_x} \sum _{j_3=1}^{d_y} {R}^{-1}(j_1,j_3) \ C^2(j_1,j_2) \\&\int ^{t+k_1\Delta _l}_0{\mathbb {E}}\Big [ m_s(j_2) -m^l_{\tau _s^l}(j_2) dV_s(j_3) \Big ]^2. \end{aligned}$$

Then by using the Itô isometry and \(d_xd^2_y\) applications of the \(C_2\)-inequality, we have

$$\begin{aligned} {\mathbb {E}}[M_{t+k_1\Delta _l}(1)]^2 \le \mathsf {C} \int ^{t+k_1\Delta _l}_0 \max _{j_2 \in \{1,\ldots ,d_x\}} {\mathbb {E}} \Big [m_s(j_2) - m^l_{\tau ^l_s}(j_2)\Big ]^2ds, \end{aligned}$$

and, as before, by using Lemma B.3 as done for \(R_{t+k_1\Delta _l}\), we can conclude that \({\mathbb {E}}[M_{t +k_1\Delta _l}(1)]^2 \le \mathsf {C} \Delta ^2_l\). Therefore, combining all terms and applying the \(C_2\)-inequality a further three times,

$$\begin{aligned}&{\mathbb {E}}\Big [\Big ([\overline{U}_{t+k_1\Delta _l}(Y) -\overline{U}_{t+k_1\Delta _l}^{l}(Y)]\Big )^2\Big ]\\&\quad \le {\mathbb {E}}[M_{t+k_1\Delta _l}(1)]^2 +{\mathbb {E}}[R_{t+k_1\Delta _l}(1)]^2 +{\mathbb {E}}[R_{t+k_1\Delta _l}]^2 \\&\quad \le \mathsf {C}\Delta ^2_l. \end{aligned}$$

\(\square \)
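To make the discrete-time estimator \(\overline{U}^l_{t+k_1\Delta _l}(Y)\) recalled in the proof above concrete, here is a minimal sketch of its Riemann-sum form; the trajectories of \(m_{k\Delta _l}\) and \(Y_{k\Delta _l}\) are assumed given, S is as in the main text, and the function name is ours:

import numpy as np

def log_nc_estimator(m, Y, C, R, S, dl):
    # bar{U}^l(Y) = sum_k <C m_k, R^{-1}(Y_{k+1}-Y_k)> - (dl/2) sum_k <m_k, S m_k>
    # m: (K+1, d_x) mean path on the Delta_l grid; Y: (K+1, d_y) observation path
    Rinv = np.linalg.inv(R)
    dY = np.diff(Y, axis=0)
    total = 0.0
    for k in range(dY.shape[0]):
        total += (C @ m[k]) @ (Rinv @ dY[k])   # martingale-type term
        total -= 0.5 * dl * m[k] @ (S @ m[k])  # remainder-type term
    return total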

C Analysis for i.i.d. MLEnKBF NC estimator

We now discuss the analysis, related to the variance, of the NC estimator using both the EnKBF and the i.i.d. MLEnKBF. This will lead to the proof of our main result, presented as Theorem 3.1. We note that in our notation we extend the discretized EnKBF to the discretized MLEnKBF by adding superscripts l, as above. Specifically, the analysis now considers the i.i.d. coupled particle system

$$\begin{aligned} \zeta _{(k+1)\Delta _l}^{i,l}&= \zeta _{k\Delta _l}^{i,l} + A\zeta _{k\Delta _l}^{i,l}\Delta _l + {Q}^{1/2} [\overline{W}_{(k+1)\Delta _l}^i-\overline{W}_{k\Delta _l}^i]\\&\quad + P^{N,l}_{k \Delta _l} C^{\top }{R}^{-1} \Big ([Y^{{i}}_{(k+1)\Delta _l}-Y^{{i}}_{k\Delta _l}]\\&\qquad -\Big [C\zeta _{k\Delta _l}^{i,l}\Delta _l +{R}^{1/2}[\overline{V}_{(k+1)\Delta _l}^i -\overline{V}_{k\Delta _l}^i]\Big ]\Big ), \\ \zeta _{(k+1)\Delta _{l-1}}^{i,l-1}&= \zeta _{k\Delta _{l-1}}^{i,l-1} + A\zeta _{k\Delta _{l-1}}^{i,l-1}\Delta _{l-1} \\&\quad + {Q}^{1/2} [\overline{W}_{(k+1)\Delta _{l-1}}^i -\overline{W}_{k\Delta _{l-1}}^i]\\&\quad + P^{N,l-1}_{k \Delta _{l-1}} C^{\top }{R}^{-1} \Big ([Y^{{i}}_{(k+1)\Delta _{l-1}}-Y^{{i}}_{k\Delta _{l-1}}] \\&\qquad -\Big [C\zeta _{k\Delta _{l-1}}^{i,l-1}\Delta _{l-1} +{R}^{1/2}[\overline{V}_{(k+1)\Delta _{l-1}}^i -\overline{V}_{k\Delta _{l-1}}^i]\Big ]\Big ), \end{aligned}$$

within the NC estimator.
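For concreteness, here is a minimal sketch of one coarse step of the coupled pair above: the fine particle takes two Euler steps of size \(\Delta _l\) and the coarse particle one step of size \(\Delta _{l-1}=2\Delta _l\), driven by the same Brownian increments. The names Pf and Pc stand in for the covariances \(P^{N,l}\) and \(P^{N,l-1}\), and the function name is ours:

import numpy as np

def coupled_pair_step(zf, zc, Pf, Pc, A, C, Q, R, dY_fine, dl, rng):
    # zf, zc: fine/coarse particles (d_x,); dY_fine: two fine observation increments
    d_x, d_y = zf.shape[0], dY_fine[0].shape[0]
    Rinv, sqQ, sqR = np.linalg.inv(R), np.linalg.cholesky(Q), np.linalg.cholesky(R)
    dWs = [rng.normal(scale=np.sqrt(dl), size=d_x) for _ in range(2)]
    dVs = [rng.normal(scale=np.sqrt(dl), size=d_y) for _ in range(2)]
    # fine level: two Euler steps of size dl
    for dW, dV, dY in zip(dWs, dVs, dY_fine):
        innov = dY - (C @ zf) * dl - sqR @ dV
        zf = zf + (A @ zf) * dl + sqQ @ dW + Pf @ C.T @ Rinv @ innov
    # coarse level: one Euler step of size 2*dl, re-using the SAME increments
    dW, dV, dY = dWs[0] + dWs[1], dVs[0] + dVs[1], dY_fine[0] + dY_fine[1]
    innov = dY - (C @ zc) * (2 * dl) - sqR @ dV
    zc = zc + (A @ zc) * (2 * dl) + sqQ @ dW + Pc @ C.T @ Rinv @ innov
    return zf, zc

It is this sharing of increments across levels that keeps the level differences small, which is what Proposition C.2 below quantifies.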

We will use the fact that the i.i.d. system coincides with the Kalman–Bucy diffusion \(\overline{X}_t\) in the limit \(N \rightarrow \infty \). This implies that the mean and covariance are defined through the Kalman–Bucy filter and the Riccati equations, which allows us to use the results from Appendix B for the process \(\zeta ^i_{k\Delta _l}\).

C.1 MSE bound on EnKBF NC estimator

Here we use the following notation: for a \(d_x\)-dimensional vector \(x\), we write \(\Vert x\Vert _2=(\sum _{j=1}^{d_x}x(j)^2)^{1/2}\).

Proposition C.1

For any \(T\in {\mathbb {N}}\) fixed and \(t\in [0,T-1]\) there exists a \(\mathsf {C}<+\infty \) such that for any \((l,N,k_1)\in {\mathbb {N}}_0 \times \{2,3,\dots \}\times \{0,1,\dots ,\Delta _l^{-1}\}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big \Vert [\widehat{\overline{U}}_{t+k_1\Delta _l}^{N,l} -\overline{U}_{t+k_1\Delta _l}](Y)\Big \Vert _2^2\Big ] \le \mathsf {C} \Big (\frac{1}{N}+\Delta _l^2\Big ). \end{aligned}$$

Proof

Using the \(C_2-\)inequality one has

$$\begin{aligned}&{\mathbb {E}}\Big [\Big \Vert [\widehat{\overline{U}}_{t+k_1 \Delta _l}^{N,l}-{\overline{U}}_{t+k_1\Delta _l}](Y)\Big \Vert _2^2\Big ]\nonumber \\&\quad \le \mathsf {C}\Big ({\mathbb {E}} \Big [\Big \Vert [\widehat{\overline{U}}_{t+k_1\Delta _l}^{N,l} -{\overline{U}}_{t+k_1\Delta _l}^l](Y)\Big \Vert _2^2\Big ] \nonumber \\&\qquad +{\mathbb {E}}\Big [\Big \Vert [{\overline{U}}_{t+k_1\Delta _l}^{l} -{\overline{U}}_{t+k_1\Delta _l}](Y)\Big \Vert _2^2\Big ]\Big ). \end{aligned}$$
(C.1)

The first term on the R.H.S. can be controlled by standard results for i.i.d. sampling (recall that \(\zeta _{t+k_1\Delta _l}^i|{\mathscr {F}}_{t+k_1\Delta _l}\) are i.i.d. Gaussian with mean \(m_{t+k_1\Delta _l}\) and covariance \(P_{t+k_1\Delta _l}\)), that is

$$\begin{aligned} {\mathbb {E}}\Big [\Big \Vert [{\widehat{\overline{U}}}_{t+k_1 \Delta _l}^{N,l}-{\overline{U}}_{t+k_1\Delta _l}^l](Y) \Big \Vert _2^2\Big ] \le \frac{\mathsf {C}}{N}. \end{aligned}$$
(C.2)

The formula in (C.2) can be proved by using the formulae for the NC estimators, in the usual integral form, and through a simple application of the general Minkowski inequality.

Note that it is crucial that (A.1) holds; otherwise the upper bound can explode as a function of l. For the right-most term on the R.H.S. of (C.1), by Jensen's inequality and Lemma B.4:

$$\begin{aligned} {\mathbb {E}}\Big [\Big \Vert [{\overline{U}}_{t+k_1\Delta _l}^{l} -{\overline{U}}_{t+k_1\Delta _l}](Y)\Big \Vert _2^2\Big ] \le \mathsf {C}\Delta _l^2. \end{aligned}$$
(C.3)

So the proof can be concluded by combining (C.1), (C.2) and (C.3). \(\square \)
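As a back-of-the-envelope reading of this bound (a heuristic remark, not part of the proof): to drive the single-level MSE below \(\varepsilon ^2\) one must control both terms simultaneously,

$$\begin{aligned} \frac{1}{N}+\Delta _l^2\le \varepsilon ^2 \quad \text {if, e.g.,} \quad N\propto \varepsilon ^{-2}, \ \ \Delta _l\propto \varepsilon , \end{aligned}$$

and it is precisely this cost-accuracy trade-off that the multilevel estimator, analysed next, improves upon.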

C.2 Variance of i.i.d. MLEnKBF NC estimator

Proposition C.2

For any \((t,q)\in {\mathbb {N}}_0 \times [1,\infty )\), there exists a \(\mathsf {C}<+\infty \) such that for any \((l,N,k_1)\in {\mathbb {N}} \times \{2,3,\dots \}\times \{0,1,\dots ,\Delta _{l-1}^{-1}-1\}\):

$$\begin{aligned}&{\mathbb {E}}\Big [\Big |[\widehat{\overline{U}}_{t+k_1 \Delta _{l-1}}^{N,l}-\widehat{\overline{U}}_{t+k_1 \Delta _{l-1}}^{N,l-1}] \nonumber \\&\quad - [{\overline{U}}_{t+k_1 \Delta _{l-1}}^{l}-{\overline{U}}_{t+k_1\Delta _{l-1}}^{l-1}] \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}\Delta ^{1/2}_l}{\sqrt{N}}. \end{aligned}$$
(C.4)

As before, in order to proceed we will make use of a martingale-remainder decomposition.

Recall that, for level \(s \in \{l-1,l\}\), we have

$$\begin{aligned}&\overline{U}_{t+k_1\Delta _{l-1}}^{N,s}(Y) -\overline{U}_{t+k_1\Delta _{l-1}}^{s}(Y)\\&\quad = M_{t+k_1\Delta _{l-1}}^s(Y) + R_{t+k_1\Delta _{l-1}}^s \\&\quad = M^s_{t+k_1\Delta _{l-1}}(1) + R^s_{t+k_1\Delta _{l-1}}(1) + R^s_{t+k_1\Delta _{l-1}}. \end{aligned}$$

Therefore, substituting into the LHS of (C.4) and using Minkowski's inequality results in

$$\begin{aligned}&{\mathbb {E}}\Big [\Big |[\widehat{\overline{U}}_{t+k_1\Delta _{l-1}}^{N,l} -\widehat{\overline{U}}_{t+k_1\Delta _{l-1}}^{N,l-1}] -[{\overline{U}}_{t+k_1\Delta _{l-1}}^{l}-{\overline{U}}_{t+k_1 \Delta _{l-1}}^{l-1}]\Big |^q\Big ]^{1/q} \\&={\mathbb {E}}\Big [\Big | (R_{t+k_1\Delta _{l-1}}^{l}(1) -R_{t+k_1\Delta _{l-1}}^{l-1}(1)) \\&\quad + (M_{t+k_1 \Delta _{l-1}}^{l}(1) -M_{t+k_1\Delta _{l-1}}^{l-1}(1))\\&\quad +(R_{t+k_1\Delta _{l-1}}^{l} -R_{t+k_1 \Delta _{l-1}}^{l-1}) \Big |^q\Big ]^{1/q} \\&\le {\mathbb {E}}\Big [\Big | (R_{t+k_1\Delta _{l-1}}^{l}(1) -R_{t+k_1\Delta _{l-1}}^{l-1}(1)) \Big |^q\Big ]^{1/q}\\&\quad +{\mathbb {E}}\Big [\Big | (M_{t+k_1\Delta _{l-1}}^{l}(1) -M_{t+k_1\Delta _{l-1}}^{l-1}(1)) \Big |^q\Big ]^{1/q} \\&\quad + {\mathbb {E}}\Big [\Big | (R_{t+k_1\Delta _{l-1}}^{l} -R_{t+k_1\Delta _{l-1}}^{l-1}) \Big |^q\Big ]^{1/q}. \end{aligned}$$

Therefore in order to prove Proposition C.2 we will split it into three lemmas, which are stated and proved below.

Lemma C.1

For any \((t,q)\in {\mathbb {N}}_0 \times [1,\infty )\), there exists a \(\mathsf {C}<+\infty \) such that for any \((l,N,k_1)\in {\mathbb {N}} \times \{2,3,\dots \}\times \{0,1,\dots ,\Delta _{l-1}^{-1}-1\}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big | \Big (R_{t+k_1\Delta _{l-1}}^{l}(1) -R_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}\Delta ^{1/2}_l}{\sqrt{N}}. \end{aligned}$$
(C.5)

Proof

$$\begin{aligned}&{\mathbb {E}}\Big [\Big | \Big (R_{t+k_1\Delta _{l-1}}^{l}(1) -R_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q}\\&\quad = {\mathbb {E}}\Big [\Big |\sum ^{d_y}_{j_1=1} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \sum ^{t\Delta ^{-1}_{l-1} +k_1-1}_{k=0} C(j_1,j_2) \\&\qquad \Big (m^{N,l}_{k \Delta _{l-1}}(j_2) - m^l_{k \Delta _{l-1}}(j_2)\Big ) {R}^{-1}(j_1,j_3) CX_{k\Delta _{l-1}}(j_3)\Delta _l \\&- \sum ^{d_y}_{j_1=1}\sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \sum ^{t\Delta ^{-1}_{l-1}+k_1-1}_{k=0} C(j_1,j_2)\\&\qquad \Big (m^{N,l-1}_{k \Delta _{l-1}}(j_2) -m^{l-1}_{k \Delta _{l-1}} (j_2)\Big )\\&\qquad {R}^{-1}(j_1,j_3) CX_{k\Delta _{l-1}}(j_3) \Delta _{l-1} \Big |^q\Big ]^{1/q}. \end{aligned}$$

Then through the generalized Minkowski inequality

$$\begin{aligned}&{\mathbb {E}}\Big [\Big | \Big (R_t^{l}(1) -R_t^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \\&\quad \le \sum _{j_1=1}^{d_y} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \sum _{j_4=1}^{d_x} C(j_1,j_2)C(j_3,j_4){R}^{-1}(j_1,j_3) \\&\qquad \times {\mathbb {E}}\Big [\Big |\int ^{t+k_1\Delta _{l-1}}_0\\&\qquad \quad \Big ( \overline{m}^{N,l}_{\tau ^l_s}(j_2) X_{\tau _s^l}(j_4) - \overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) X_{\tau _s^{l-1}}(j_4) \Big ) ds\Big |^q\Big ]^{1/q} \\&\quad \le \mathsf {C}\sum _{j_2=1}^{d_x}\sum _{j_4=1}^{d_x} \Big ({\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0\\&\qquad \quad \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) \Big ) X_{\tau ^{l}_s}(j_4) ds \Big |^q\Big ]^{1/q} \\&\qquad + {\mathbb {E}}\Big [\Big | \int ^{t+k_1 \Delta _{l-1}}_0 \overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2)\\&\qquad \quad \Big (X_{\tau ^l_s}(j_4)-X_{\tau ^{l-1}_s}(j_4)\Big ) ds \Big |^q\Big ]^{1/q}\Big ) \\&\quad =: T_1 + T_2, \end{aligned}$$

where we have used \(\tau ^s_t = \lfloor \frac{t}{\Delta _s} \rfloor \Delta _s \) for \(t \in {\mathbb {R}}^+\), and

$$\begin{aligned} \overline{m}^{N,s}_{k\Delta _l} = {m}^{N,s}_{k\Delta _l} -m^{s}_{k\Delta _l}, \quad s \in \{l-1,l\}. \end{aligned}$$
(C.6)

For \(T_1\) we can express it as

$$\begin{aligned} T_1&= \mathsf {C} \sum _{j_2=1}^{d_x}\sum _{j_4=1}^{d_x} {\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0\\&\quad \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_2)\\&\qquad \quad +\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_2) -\overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2)\Big ) X_{\tau ^l_s}(j_4)ds \Big |^q\Big ]^{1/q} \\&\le \mathsf {C} \sum _{j_2=1}^{d_x}\sum _{j_4=1}^{d_x} \int ^{t+k_1\Delta _{l-1}}_0 \\&\quad \Big ( {\mathbb {E}} \Big [\Big | \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_2)\Big ) X_{\tau ^l_s}(j_4) \Big |^q\Big ]^{1/q}\\&\quad + {\mathbb {E}}\Big [\Big | \Big ( \overline{m}^{N,l}_{\tau ^{l-1}_s} (j_2) - \overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2)\Big ) X_{\tau ^l_s}(j_4) \Big |^q\Big ]^{1/q}\Big )ds,\\&\quad =:T_3+T_4. \end{aligned}$$

again by using the generalized Minkowski inequality and Jensen’s inequality.

For \(T_3\) we can apply the Marcinkiewicz–Zygmund and Hölder inequalities, using the fact that the means can be expressed as the expectations of (3.9)–(3.10):

$$\begin{aligned} T_3= & {} \mathsf {C} \sum _{j_2=1}^{d_x}\sum _{j_4=1}^{d_x} \int ^{t+k_1\Delta _{l-1}}_0 {\mathbb {E}} \\&\Big [ \Big | \frac{1}{N} \sum ^N_{i=1}\Big [\Big ({\zeta }^{i,l}_{\tau ^l_s}(j_2) -{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2)\Big )\\&\quad -\Big (m^l_{\tau ^{l}_s}(j_2)-m^l_{\tau ^{l-1}_s}(j_2)\Big )\Big ] X_{\tau ^l_s}(j_4) \Big |^q\Big ]^{1/q} ds \\\le & {} \mathsf {C} \sum _{j_2=1}^{d_x}\sum _{j_4=1}^{d_x} \int ^{t+k_1\Delta _{l-1}}_0 {\mathbb {E}} \\&\Big [ \Big | \frac{1}{N} \sum ^N_{i=1}\Big [\Big ({\zeta }^{i,l}_{\tau ^l_s}(j_2) -{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2)\Big )\\&\quad -\Big (m^l_{\tau ^{l}_s}(j_2)-m^l_{\tau ^{l-1}_s}(j_2)\Big )\Big ] \Big |^{2q}\Big ]^{1/2q} \\&\times {\mathbb {E}} \Big [ \Big |X_{\tau ^l_s}(j_4) \Big |^{2q}\Big ]^{1/2q} ds \\\le & {} \frac{\mathsf {C}_q}{\sqrt{N}} \sum _{j_2=1}^{d_x} \sum _{j_4=1}^{d_x} \int ^{t+k_1\Delta _{l-1}}_0 {\mathbb {E}} \Big [ \Big | [{\zeta }^l_{\tau ^l_s}(j_2) - {\zeta }^l_{\tau ^{l-1}_s}(j_2)]\Big |^{2q}\Big ]^{1/2q}\\&{\mathbb {E}} \Big [ \Big |X_{\tau ^l_s}(j_4) \Big |^{2q}\Big ]^{1/2q} ds. \end{aligned}$$

The term \({\mathbb {E}}[|X_{\tau ^l_s}|^{2q}]\) is of order \({\mathcal {O}}(1)\) and the increment term is of order \({\mathcal {O}}({\Delta _l})\), via the strong error, Lemma B.2. For \(T_4\) we know it is sufficiently small, being of order \({\mathcal {O}}(\Delta _l)\).

For \(T_2\), we use the definition of the discretized diffusion process

$$\begin{aligned} X_{\tau ^l_s} - X_{\tau ^{l-1}_s} = \int ^{\tau ^l_s}_{\tau ^{l-1}_s} AX_{u} du + {Q}^{1/2}\Big (W_{\tau ^{l}_s} - W_{\tau ^{l-1}_s}\Big ). \end{aligned}$$
(C.7)

As before, we know the Brownian increment satisfies \({\mathbb {E}}\Big [\Big |W_{\tau ^{l}_s} - W_{\tau ^{l-1}_s}\Big |^q \Big ]^{1/q} = {\mathcal {O}}(\Delta ^{1/2}_l)\), and, as before, \({\mathbb {E}}[|X_u|^q]^{1/q} \le \mathsf {C}\). Therefore the difference in (C.7) is of order \({\mathcal {O}}(\Delta ^{1/2}_l)\). Finally, \({\mathbb {E}}[|\overline{m}^{N,l-1}_{\tau ^{l-1}_s}|^q]^{1/q}\) is of order \({\mathcal {O}}(N^{-\frac{1}{2}})\). Therefore, combining all terms, we can deduce that

$$\begin{aligned} {\mathbb {E}}\Big [\Big | \Big (R_{t+k_1\Delta _{l-1}}^{l}(1) -R_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}\Delta ^{1/2}_l}{\sqrt{N}}. \end{aligned}$$

\(\square \)

Lemma C.2

For any \((t,q)\in {\mathbb {N}}_0 \times [1,\infty )\), there exists a \(\mathsf {C}<+\infty \) such that for any \((l,N,k_1)\in {\mathbb {N}} \times \{2,3,\dots \}\times \{0,1,\dots ,\Delta _{l-1}^{-1}-1\}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big | \Big (M_{t+k_1\Delta _{l-1}}^{l}(1) -M_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}\Delta _l^{1/2}}{\sqrt{N}}.\nonumber \\ \end{aligned}$$
(C.8)

Proof

As before, we set \(\tau ^s_t = \lfloor \frac{t}{\Delta _s} \rfloor \Delta _s \) for \(t \in {\mathbb {R}}^+\), and make use of (C.6)

$$\begin{aligned}&{\mathbb {E}}\Big [\Big | \Big (M_{t+k_1\Delta _{l-1}}^{l}(1) -M_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \\&\quad = {\mathbb {E}}\Big [\Big |\sum ^{d_y}_{j_1=1} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y}\\&\qquad \Big (\sum ^{t\Delta ^{-1}_{l-1}+k_1-1}_{k=0} C(j_1,j_2) \Big (\overline{m}^{N,l}_{k \Delta _{l-1}}(j_2)\Big ) {R}^{-1}(j_1,j_3) \\&\qquad \times \Big ([Y_{(k+1)\Delta _{l-1}}-Y_{k\Delta _{l-1}}] (j_3)-CX_{k\Delta _{l-1}}(j_3)\Delta _l\Big ) \\&\qquad - \sum ^{t\Delta ^{-1}_{l-1}+k_1-1}_{k=0} C(j_1,j_2) \Big (\overline{m}^{N,l-1}_{k \Delta _{l-1}}(j_2)\Big ){R}^{-1}(j_1,j_3)\\&\qquad \Big ([Y_{(k+1)\Delta _{l-1}}-Y_{k\Delta _{l-1}}] (j_3)-CX_{k\Delta _{l-1}}(j_3)\Delta _{l-1}\Big ) \Big ) \Big |^q\Big ]^{1/q} \end{aligned}$$

Then, by using the generalized Minkowski and Jensen inequalities,

$$\begin{aligned}&{\mathbb {E}}\Big [\Big | \Big (M_{t+k_1\Delta _{l-1}}^{l}(1) -M_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \\&\quad = {\mathbb {E}}\Big [\Big |\sum _{j_1=1}^{d_y} \sum _{j_2=1}^{d_x}\sum ^{d_y}_{j_3=1}C(j_1,j_2){R}^{-1}(j_1,j_3)\\&\qquad \int ^{t+k_1\Delta _{l-1}}_0 \Big ( \overline{m}^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) \Big ) dY_s(j_3) \\&\qquad - \Big ( \overline{m}^{N,l}_{\tau ^l_s}(j_2) CX_{\tau _s^l}(j_3) - \overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) CX_{\tau _s^{l-1}}(j_3) \Big ) ds\Big |^q\Big ]^{1/q} \\&\quad \le \mathsf {C} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \sum _{j_4=1}^{d_x} {\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0 \\&\qquad \Big ( \overline{m}^{N,l}_{\tau ^l_s} (j_2)- \overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) \Big ) X_{\tau ^{l}_s}(j_4)ds \\&\qquad + \Big ( \overline{m}^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) \Big )dV_s(j_3)\\&\qquad - \Big ( \overline{m}^{N,l}_{\tau ^l_s}(j_2) X_{\tau _s^l} (j_4) - \overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) X_{\tau _s^{l-1}} (j_4) \Big ) ds\Big |^q\Big ]^{1/q} \\&\quad \le \mathsf {C} \sum _{j_2=1}^{d_x}\sum _{j_3=1}^{d_y} \sum _{j_4=1}^{d_x}\int ^{t+k_1\Delta _{l-1}}_0 \\&\qquad \Big ({\mathbb {E}} \Big [\Big | -\overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2) \Big (X_{\tau _s^l}(j_4) - X_{\tau _s^{l-1}}(j_4) \Big ) ds \Big |^q\Big ]^{1/q} \\&\qquad + {\mathbb {E}}\Big [\Big | \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l-1}_{\tau ^{l-1}_s}(j_2)\Big ) dV_s(j_3)\Big |^q\Big ]^{1/q}\Big ) \\&\quad =: T_2+T_{3}. \end{aligned}$$

For \(T_2\), the same analysis as in Lemma C.1 applies. Therefore we can conclude that

$$\begin{aligned} T_2 \le \frac{\mathsf {C} \Delta ^{1/2}_l}{\sqrt{N}}. \end{aligned}$$

For \(T_{3}\), we use the Burkholder–Davis–Gundy and Hölder inequalities, as done previously in (A.5), together with the bound from Lemma C.1 for the \(\big (\overline{m}^{N,l}_{\tau ^l_s}- \overline{m}^{N,l-1}_{\tau ^{l-1}_s}\big )\) term. This implies that

$$\begin{aligned} {\mathbb {E}}\Big [\Big | \Big (M_{t+k_1\Delta _{l-1}}^{l}(1) -M_{t+k_1\Delta _{l-1}}^{l-1}(1)\Big ) \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C} \Delta _l^{1/2}}{\sqrt{N}}. \end{aligned}$$

\(\square \)

Lemma C.3

For any \((t,q)\in {\mathbb {N}}_0 \times [1,\infty )\), there exists a \(\mathsf {C}<+\infty \) such that for any \((l,N,k_1)\in {\mathbb {N}} \times \{2,3,\dots \}\times \{0,1,\dots ,\Delta _{l-1}^{-1}-1\}\):

$$\begin{aligned} {\mathbb {E}}\Big [\Big | \Big (R_{t+k_1\Delta _{l-1}}^{l} -R_{t+k_1\Delta _{l-1}}^{l-1}\Big ) \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}\Delta _l}{\sqrt{N}}. \end{aligned}$$
(C.9)

Proof

Again we let \(\tau ^s_t = \lfloor \frac{t}{\Delta _s} \rfloor \Delta _s\) for \(t \in {\mathbb {R}}^+\), and use the generalized Minkowski inequality:

$$\begin{aligned}&{\mathbb {E}}\Big [\Big | \Big (R_{t+k_1\Delta _{l-1}}^{l} -R_{t+k_1\Delta _{l-1}}^{l-1}\Big ) \Big |^q\Big ]^{1/q} \\&\quad = \Big ({\mathbb {E}}\Big [\Big | \sum _{j_1=1}^{d_x} \sum _{j_2=1}^{d_x} \frac{S(j_1,j_2)}{2} \sum ^{t\Delta ^{-1}_{l-1}+k_1-1}_{k=0}\\&\qquad \Big (m^{N,l}_{k \Delta _{l-1}}(j_1)m^{N,l}_{k \Delta _{l-1}}(j_2) - m^l_{k \Delta _{l-1}}(j_1) m^l_{k \Delta _{l-1}}(j_2)\Big )\\&\qquad - \sum ^{t\Delta ^{-1}_{l-1}+k_1-1}_{k=0} \ \Big (m^{N,l-1}_{k \Delta _{l-1}}(j_1)m^{N,l-1}_{k \Delta _{l-1}}(j_2) \\&\qquad \qquad - m^l_{k \Delta _{l-1}}(j_1)m^l_{k \Delta _{l-1}}(j_2)\Big )\Big |^q\Big ]^{1/q} \Big ) \\&\quad = \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \frac{S(j_1,j_2)}{2} \Big ({\mathbb {E}} \Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0\\&\qquad \Big (m^{N,l}_{\tau ^l_s}(j_1)m^{N,l}_{\tau ^l_s}(j_2) -m^l_{\tau ^{l}_s}(j_1)m^l_{\tau ^{l}_s}(j_2)\Big ) \\&\qquad - \Big (m^{N,l}_{\tau ^{l-1}_s}(j_1)m^{N,l}_{\tau ^{l-1}_s} (j_2) - m^l_{\tau ^{l-1}_s}(j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) \\&\qquad + \Big (m^{N,l}_{\tau ^{l-1}_s}(j_1)m^{N,l}_{\tau ^{l-1}_s} (j_2) - m^l_{\tau ^{l-1}_s}(j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) \\&\qquad - \Big (m^{N,l-1}_{\tau ^{l-1}_s}(j_1)m^{N,l-1}_{\tau ^{l-1}_s} (j_2) - m^l_{\tau ^{l-1}_s}(j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) ds \Big |^q\Big ]^{1/q} \Big ) \\&\quad \le \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \Big ({\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0\\&\qquad \Big (m^{N,l}_{\tau ^l_s}(j_1)m^{N,l}_{\tau ^l_s}(j_2) - m^l_{\tau ^{l}_s}(j_1)m^l_{\tau ^{l}_s}(j_2)\Big ) \\&\qquad -\Big (m^{N,l}_{\tau ^{l-1}_s}(j_1)m^{N,l}_{\tau ^{l-1}_s} (j_2) - m^l_{\tau ^{l-1}_s}(j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) ds \Big |^q\Big ]^{1/q} \Big ) \\&\qquad + \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \Big ({\mathbb {E}}\Big [\Big | \Big (m^{N,l}_{\tau ^{l-1}_s} (j_1)m^{N,l}_{\tau ^{l-1}_s}(j_2) \\&\qquad -m^l_{\tau ^{l-1}_s} (j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) - \Big (m^{N,l-1}_{\tau ^{l-1}_s}(j_1) m^{N,l-1}_{\tau ^{l-1}_s}(j_2) \\&\qquad -m^l_{\tau ^{l-1}_s}(j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) ds \Big |^q\Big ]^{1/q} \Big ) \\&\quad =: T_1 + T_2. \end{aligned}$$

For \(T_1\) we make use of (C.6), together with the generalized Minkowski inequality:

$$\begin{aligned} T_1&= \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \Big ({\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0\\&\qquad \Big (m^{N,l}_{\tau ^l_s}(j_1)m^{N,l}_{\tau ^l_s}(j_2) - m^l_{\tau ^{l}_s}(j_1)m^l_{\tau ^{l}_s}(j_2)\Big ) \\&\quad -\Big (m^{N,l}_{\tau ^{l-1}_s}(j_1)m^{N,l}_{\tau ^{l-1}_s} (j_2) - m^l_{\tau ^{l-1}_s}(j_1)m^l_{\tau ^{l-1}_s}(j_2)\Big ) ds \Big |^q\Big ]^{1/q} \Big ) \\&= \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \Big ({\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0\\&\qquad \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_1)m^{N,l}_{\tau ^l_s}(j_2) -\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_1)m^{N,l}_{\tau ^{l-1}_s}(j_2) \Big ) \\&\quad + \Big ( m^l_{\tau ^l_s}(j_1)\overline{m}^{N,l}_{\tau ^l_s}(j_2) -m^l_{\tau ^{l-1}_s}(j_1)\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_2) \Big ) ds \Big |^q\Big ]^{1/q} \Big ) \\&\le \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} {\mathbb {E}}\\&\quad \Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0 \overline{m}^{N,l}_{\tau ^l_s}(j_1)m^{N,l}_{\tau ^l_s}(j_2)\\&\qquad -\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_1)m^{N,l}_{\tau ^{l-1}_s} (j_2) ds \Big |^q\Big ]^{1/q} \\&\quad + \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} {\mathbb {E}}\\&\quad \Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0 m^l_{\tau ^l_s}(j_1)\overline{m}^{N,l}_{\tau ^l_s}(j_2)\\&\qquad - m^l_{\tau ^{l-1}_s}(j_1)\overline{m}^{N,l}_{\tau ^{l-1}_s} (j_2) ds \Big |^q \Big ]^{1/q} \\&=: T_3 + T_4. \end{aligned}$$

For \(T_3\) we can use the difference-of-means trick, as in (B.6),

$$\begin{aligned} T_3= & {} \mathsf {C} \sum _{j_1=1}^{d_x} \sum _{j_2=1}^{d_x} {\mathbb {E}}\\&\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0 \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_1) - \overline{m}^{N,l}_{\tau ^{l-1}_s}(j_1) \Big ) m^{N,l}_{\tau ^l_s}(j_2)\\&\qquad + \,\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_1) \Big ( {m}^{N,l}_{\tau ^l_s}(j_2) - {m}^{N,l}_{\tau ^{l-1}_s} (j_2) \Big )ds \Big |^q \Big ]^{1/q}. \end{aligned}$$

We know that \({\mathbb {E}}[|m^{N,l}_{\tau ^l_s}|^q] \le \mathsf {C}\), and that \(\overline{m}^{N,l}_{\tau ^{l-1}_s}(j_1)\) is of order \({\mathcal {O}}(N^{-\frac{1}{2}})\). The last bracketed term is of order \({\mathcal {O}}(\Delta _l)\), arising from the strong error. The first bracketed term is the same one that appears in Lemma C.1, and is of order \({\mathcal {O}}(\frac{\Delta _l}{\sqrt{N}})\).

Similarly, \(T_4\) can be expressed as

$$\begin{aligned} T_4= & {} \mathsf {C} \sum _{j_1=1}^{d_x} \sum _{j_2=1}^{d_x} {\mathbb {E}}\\&\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0 \Big (\overline{m}^{N,l}_{\tau ^l_s}(j_2) - \overline{m}^{N,l}_{\tau ^{l-1}_s}(j_2) \Big ) m^{l}_{\tau ^l_s}(j_1)\\&\qquad + \overline{m}^{N,l}_{\tau ^{l-1}_s}(j_2) \Big ( {m}^{l}_{\tau ^l_s}(j_1) - {m}^{l}_{\tau ^{l-1}_s}(j_1) \Big )ds \Big |^q \Big ]^{1/q}, \end{aligned}$$

which contains the same bounds as \(T_3\), therefore implying that

$$\begin{aligned} T_1 \le \frac{\mathsf {C} \Delta _l}{\sqrt{N}}. \end{aligned}$$

For \(T_2\), we can rewrite it as

$$\begin{aligned} T_2= & {} \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x}\\&\Big ({\mathbb {E}} \Big [ \Big | \int ^{t+k_1\Delta _{l-1}}_0 \Big (\frac{1}{N} \sum ^N_{i=1} {\zeta }^{i,l}_{\tau ^{l-1}_s}(j_1)\Big ) \Big (\frac{1}{N} \sum ^N_{i=1} {\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2)\Big )\\&- \Big (\frac{1}{N} \sum ^N_{i=1} {\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_1)\Big ) \Big (\frac{1}{N} \sum ^N_{i=1} {\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_2)\Big ) \\&- {\mathbb {E}}[ {\zeta }^l_{\tau ^{l-1}_s}(j_1)] {\mathbb {E}}[ {\zeta }^l_{\tau ^{l-1}_s}(j_2)]\\&+ {\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s} (j_1)]{\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s} (j_2)] ds \Big |^q \Big ]^{1/q}\Big ). \end{aligned}$$

To proceed, we again use \(\overline{m}^{N,s}_{\tau ^{l-1}_s} ={m}^{N,s}_{\tau ^{l-1}_s} - m_{\tau ^{l-1}_s}\), for \(s \in \{l-1,l\}\), which, written at the particle level in terms of the expectation, reads

$$\begin{aligned} \frac{1}{N}\sum ^N_{i=1}\overline{\zeta }^{i,s}_{\tau ^{l-1}_s}(j) = \frac{1}{N}\sum ^N_{i=1}{{\zeta }}^{i,s}_{\tau ^{l-1}_s}(j) - {\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j)]. \end{aligned}$$

Therefore the terms inside the integral in \(T_2\), for \(s \in \{l-1,l\}\), become

$$\begin{aligned}&\Big (\frac{1}{N} \sum ^N_{i=1} {\zeta }^{i,s}_{\tau ^{l-1}_s}(j_1)\Big ) \Big (\frac{1}{N} \sum ^N_{i=1} {\zeta }^{i,s}_{\tau ^{l-1}_s}(j_2)\Big ) - {\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_1)] {\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_2)]\\&\quad = \Big (\frac{1}{N} \sum ^N_{i=1} \overline{{\zeta }}^{i,s}_{\tau ^{l-1}_s}(j_1) +{\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_1)] \Big )\\&\qquad \Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }^{i,s}_{\tau ^{l-1}_s}(j_2) +{\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_2)] \Big ) -{\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_1)] {\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_2)] \\&\quad = \Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }^{i,s}_{\tau ^{l-1}_s}(j_1)\Big ) \Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }^{i,s}_{\tau ^{l-1}_s} (j_2)\Big ) \\&\qquad + \Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }_{\tau ^{l-1}_s}^{i,s}(j_1)\Big ) {\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_2)]\\&\qquad + \Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }^{i,s}_{\tau ^{l-1}_s} (j_2)\Big ){\mathbb {E}}[ {\zeta }^s_{\tau ^{l-1}_s}(j_1)]. \end{aligned}$$

By substituting this into \(T_2\), we have

$$\begin{aligned} T_2 = \mathsf {C}\sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x} \Big ( {\mathbb {E}} \Big [ \Big | \int ^{t+k_1\Delta _{l-1}}_0 T_5 +T_6+T_7 \ ds \Big |^q \Big ]^{1/q} \Big ), \end{aligned}$$

where

$$\begin{aligned} T_5&= \Big (\frac{1}{N} \sum ^N_{i=1}\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_1)\Big )\Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2)\Big ) -\Big (\frac{1}{N} \sum ^N_{i=1}\overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_1)\Big )\Big (\frac{1}{N} \sum ^N_{i=1} \overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_2)\Big ),\\ T_6&= \Big (\frac{1}{N} \sum ^N_{i=1}\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_1)\Big ){\mathbb {E}}[ {\zeta }^l_{\tau ^{l-1}_s}(j_2)] -\Big (\frac{1}{N} \sum ^N_{i=1}\overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_1)\Big ){\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s}(j_2)], \\ T_7&= \Big (\frac{1}{N} \sum ^N_{i=1}\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2)\Big ){\mathbb {E}}[ {\zeta }^l_{\tau ^{l-1}_s}(j_1)] -\Big (\frac{1}{N} \sum ^N_{i=1}\overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_2)\Big ){\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s}(j_1)]. \end{aligned}$$

The terms \(T_6\) and \(T_{7}\) can be expressed as

$$\begin{aligned} T_6+T_{7}&= \frac{1}{N}\sum ^N_{i=1} \overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_1) \Big ({\mathbb {E}}[ {\zeta }^l_{\tau ^{l-1}_s}(j_2)] -{\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s}(j_2)]\Big ) + {\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s}(j_2)] \frac{1}{N}\sum ^N_{i=1}\Big (\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_1) - \overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_1)\Big ) \\&\quad + \frac{1}{N}\sum ^N_{i=1}\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2) \Big ({\mathbb {E}}[ {\zeta }^l_{\tau ^{l-1}_s}(j_1)] -{\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s}(j_1)]\Big ) + {\mathbb {E}}[ {\zeta }^{l-1}_{\tau ^{l-1}_s}(j_1)] \frac{1}{N}\sum ^N_{i=1}\Big (\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2) - \overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_2)\Big ). \end{aligned}$$

For \(T_5\), we can rewrite it as

$$\begin{aligned} T_5&= \Big (\frac{1}{N}\sum ^N_{i=1} \overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2)\Big )\Big (\frac{1}{N} \sum ^N_{i=1}\Big (\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_1) - \overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_1)\Big ) \Big )\\&\quad + \Big (\frac{1}{N}\sum ^N_{i=1} \overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_1)\Big )\Big (\frac{1}{N}\sum ^N_{i=1} \Big (\overline{\zeta }^{i,l}_{\tau ^{l-1}_s}(j_2) -\overline{\zeta }^{i,l-1}_{\tau ^{l-1}_s}(j_2)\Big )\Big ). \end{aligned}$$
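Both this rearrangement and that of \(T_6+T_7\) rest on the elementary product-difference identity: for scalars \(a,a',b,b'\),

$$\begin{aligned} ab - a'b' = b(a-a') + a'(b-b'), \end{aligned}$$

applied with the sample averages and the expectations playing the roles of the respective factors.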

Using these expressions, together with the generalized Minkowski and Jensen inequalities, \(T_2\) can be bounded as

$$\begin{aligned} T_2&\le \mathsf {C} \sum _{j_1=1}^{d_x}\sum _{j_2=1}^{d_x}\Big ({\mathbb {E}}\Big [ \Big | \int ^{t+k_1\Delta _{l-1}}_0 \Big (\frac{1}{N}\sum ^N_{i=1} \overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_2)\Big ) \Big (\frac{1}{N} \sum ^N_{i=1} \Big (\overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_1) -\overline{\zeta }_{\tau ^{l-1}_s}^{i,l-1}(j_1)\Big )\Big ) \, ds\Big |^q \Big ]^{1/q} \\&\quad + {\mathbb {E}}\Big [ \Big |\int ^{t+k_1\Delta _{l-1}}_0 \Big (\frac{1}{N}\sum ^N_{i=1} \overline{\zeta }_{\tau ^{l-1}_s}^{i,l-1}(j_1)\Big ) \Big (\frac{1}{N} \sum ^N_{i=1} \Big (\overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_2) -\overline{\zeta }_{\tau ^{l-1}_s}^{i,l-1}(j_2)\Big )\Big ) \, ds\Big |^q \Big ]^{1/q} \\&\quad + \Big |{\mathbb {E}}[\zeta _{\tau ^{l-1}_s}^l(j_2)] -{\mathbb {E}}[\zeta _{\tau ^{l-1}_s}^{l-1}(j_2)]\Big | \, {\mathbb {E}}\Big [\Big |\int ^{t+k_1\Delta _{l-1}}_0 \frac{1}{N}\sum ^N_{i=1}\overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_1) \, ds\Big |^q \Big ]^{1/q} \\&\quad + \Big |{\mathbb {E}}[\zeta _{\tau ^{l-1}_s}^{l-1}(j_2)] \Big | \, {\mathbb {E}}\Big [ \Big | \int ^{t+k_1\Delta _{l-1}}_0 \frac{1}{N}\sum ^N_{i=1} \Big (\overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_1) - \overline{\zeta }_{\tau ^{l-1}_s}^{i,l-1}(j_1)\Big ) \, ds \Big |^q \Big ]^{1/q} \\&\quad + \Big |{\mathbb {E}}[\zeta _{\tau ^{l-1}_s}^l(j_1)] -{\mathbb {E}}[\zeta _{\tau ^{l-1}_s}^{l-1}(j_1)]\Big | \, {\mathbb {E}}\Big [\Big | \int ^{t+k_1\Delta _{l-1}}_0 \frac{1}{N}\sum ^N_{i=1}\overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_2) \, ds\Big |^q \Big ]^{1/q} \\&\quad + \Big |{\mathbb {E}}[\zeta _{\tau ^{l-1}_s}^{l-1}(j_1)] \Big | \, {\mathbb {E}}\Big [ \Big | \int ^{t+k_1\Delta _{l-1}}_0 \frac{1}{N}\sum ^N_{i=1}\Big (\overline{\zeta }_{\tau ^{l-1}_s}^{i,l}(j_2) - \overline{\zeta }_{\tau ^{l-1}_s}^{i,l-1}(j_2)\Big ) \, ds \Big |^q \Big ]^{1/q} \Big ). \end{aligned}$$
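For reference, the generalized (integral) Minkowski inequality invoked here states that, for \(q \ge 1\) and a suitably integrable process \((f_s)_{s \ge 0}\),

$$\begin{aligned} {\mathbb {E}}\Big [\Big |\int _0^T f_s \, ds\Big |^q\Big ]^{1/q} \le \int _0^T {\mathbb {E}}\big [|f_s|^q\big ]^{1/q} \, ds, \end{aligned}$$

which allows the \({\mathbb {L}}_q\)-norm to be passed inside the time integral term by term.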

Then, using the strong and weak error results for the diffusion processes in Lemmas B.2 and B.3, we reach the bound

$$\begin{aligned} {\mathbb {E}}\Big [\Big | \Big (R_{t+k_1\Delta _{l-1}}^{l} -R_{t+k_1\Delta _{l-1}}^{l-1}\Big ) \Big |^q\Big ]^{1/q} \le \frac{\mathsf {C}\Delta _l}{\sqrt{N}}. \end{aligned}$$
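Schematically, each of the six terms above pairs an \({\mathcal {O}}(N^{-1/2})\) sample-average fluctuation with an \({\mathcal {O}}(\Delta _l)\) strong or weak coupling difference, or a bounded expectation with an \({\mathcal {O}}(\Delta _l/\sqrt{N})\) coupled fluctuation difference; either way each term contributes

$$\begin{aligned} {\mathcal {O}}\Big (\frac{\Delta _l}{\sqrt{N}}\Big ), \end{aligned}$$

which, combined with the bound on \(T_1\), gives the display above.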

\(\square \)

Therefore, combining the results of Lemmas C.1-C.3 leads to the desired result (C.4) in Proposition C.2.

C.3 Proof of Theorem 3.1

Proof

Noting (2.15) one has

$$\begin{aligned}&[\widehat{\overline{U}}_t^{ML}-{\overline{U}}_t](Y) =[\widehat{\overline{U}}_t^{N_0,0}-{\overline{U}}_t^0](Y) + \sum _{l=1}^L [\widehat{\overline{U}}_t^{N_l,l} -\widehat{\overline{U}}_t^{N_l,l-1}-{\overline{U}}_t^{l} +{\overline{U}}_t^{l-1}](Y) + [{\overline{U}}_t^L-{\overline{U}}_t](Y). \end{aligned}$$

Thus, by three applications of the \(C_2\)-inequality we have

$$\begin{aligned}&{\mathbb {E}}\Big [\Big \Vert [\widehat{\overline{U}}_t^{ML} -{\overline{U}}_t](Y)\Big \Vert _2^2\Big ] \\&\quad \le \mathsf {C} \Big ({\mathbb {E}} \Big [\Big \Vert [\widehat{\overline{U}}_t^{N_0,0} -{\overline{U}}_t^0](Y)\Big \Vert _2^2\Big ] \\&\qquad + {\mathbb {E}}\Big [\Big \Vert \sum _{l=1}^L [\widehat{\overline{U}}_t^{N_l,l} -\widehat{\overline{U}}_t^{N_l,l-1}-{\overline{U}}_t^{l} +{\overline{U}}_t^{l-1}](Y)\Big \Vert _2^2\Big ]\\&\qquad +{\mathbb {E}}\Big [\Big \Vert [{\overline{U}}_t^L -{\overline{U}}_t](Y)\Big \Vert _2^2\Big ]\Big ). \end{aligned}$$
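Here the \(C_2\)-inequality refers to the elementary bound, valid for any vectors \(a,b\),

$$\begin{aligned} \Vert a+b\Vert _2^2 \le 2\Vert a\Vert _2^2 + 2\Vert b\Vert _2^2, \end{aligned}$$

applied repeatedly to separate the three error sources.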

For the first term on the R.H.S. one can use (C.2), and for the last term one can use (C.3). For the middle term, one has

$$\begin{aligned}&\Big \Vert \sum _{l=1}^L [\widehat{\overline{U}}_t^{N_l,l} -\widehat{\overline{U}}_t^{N_l,l-1}-{\overline{U}}_t^{l} +{\overline{U}}_t^{l-1}](Y)\Big \Vert _2^2 \\&\quad = \sum _{l=1}^L \sum _{j=1}^{d_x}\Big ([\widehat{\overline{U}}_t^{N_l,l} -\widehat{\overline{U}}_t^{N_l,l-1}-{\overline{U}}_t^{l} +{\overline{U}}_t^{l-1}](Y)\Big )(j)^2 \\&\qquad + \sum _{l=1}^L\sum _{q=1}^L{\mathbb {I}}_{D^c}(l,q) \sum _{j=1}^{d_x} \Big ([\widehat{\overline{U}}_t^{N_l,l} -\widehat{\overline{U}}_t^{N_l,l-1}-{\overline{U}}_t^{l} +{\overline{U}}_t^{l-1}](Y)\Big )(j) \times \Big ([\widehat{\overline{U}}_t^{N_q,q} -\widehat{\overline{U}}_t^{N_q,q-1}-{\overline{U}}_t^{q} +{\overline{U}}_t^{q-1}](Y)\Big )(j). \end{aligned}$$

Then, using the independence of the coupled particle systems together with Proposition C.2, the proof can be concluded. \(\square \)
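
To make the multilevel mechanism behind Theorem 3.1 concrete, the following is a minimal, self-contained sketch (in Python) of the telescoping combination \(\widehat{\overline{U}}_t^{ML} = \widehat{\overline{U}}_t^{N_0,0} + \sum _{l=1}^L (\widehat{\overline{U}}_t^{N_l,l} - \widehat{\overline{U}}_t^{N_l,l-1})\). The function level_estimate below is a hypothetical stand-in, not the EnKBF of this paper: it only mimics an \({\mathcal {O}}(\Delta _l)\) discretization bias and an \({\mathcal {O}}(N^{-1/2})\) Monte Carlo fluctuation, with the fine/coarse coupling modelled by shared randomness.

import numpy as np

rng = np.random.default_rng(0)
TRUE_VALUE = 1.0  # hypothetical stand-in for the target quantity

def level_estimate(level, n_particles, noise):
    # Toy level-l estimator: O(Delta_l) bias plus O(N^{-1/2}) fluctuation,
    # mimicking the rates of Lemmas B.2-B.3 and Proposition C.2.
    delta_l = 2.0 ** (-level)
    return TRUE_VALUE + delta_l + noise / np.sqrt(n_particles)

L_max = 4
N = [2 ** (12 - l) for l in range(L_max + 1)]  # more particles on coarser levels

# Level-0 term of the telescoping sum.
estimate = level_estimate(0, N[0], rng.standard_normal())

# Coupled increments: fine and coarse share the same randomness. In this
# toy the noise cancels exactly; for the coupled EnKBF the increment is
# of order Delta_l / sqrt(N_l), as in Proposition C.2.
for l in range(1, L_max + 1):
    shared = rng.standard_normal()
    estimate += level_estimate(l, N[l], shared) - level_estimate(l - 1, N[l], shared)

print(estimate)  # approx TRUE_VALUE + 2**(-L_max): only the fine-level bias survives

Under these assumptions the printed value sits within Monte Carlo noise of \(1 + 2^{-4}\): the increments cancel the coarse-level biases while the shared randomness keeps their variance small, and only the finest-level bias survives, playing the role of the \([{\overline{U}}_t^L-{\overline{U}}_t](Y)\) term in the decomposition above.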


Cite this article

Ruzayqat, H., Chada, N.K. & Jasra, A. Multilevel estimation of normalization constants using ensemble Kalman–Bucy filters. Stat Comput 32, 38 (2022). https://doi.org/10.1007/s11222-022-10094-2

