
Simultaneous confidence bands for comparing variance functions of two samples based on deterministic designs


Abstract

Asymptotically correct simultaneous confidence bands (SCBs) are proposed, in both multiplicative and additive form, for comparing the variance functions of two samples in the nonparametric regression model based on deterministic designs. The multiplicative SCB is based on a two-step estimator of the ratio of the variance functions, which is as efficient, up to order \(n^{-1/2}\), as the infeasible estimator available when the two mean functions are known a priori. The additive SCB, the logarithmic transform of the multiplicative SCB, is location and scale invariant in the sense that its width is free of the unknown mean and variance functions of both samples. Simulation experiments provide strong evidence corroborating the asymptotic theory. The proposed SCBs are applied to several strata pressure data sets from the Bullianta Coal Mine in Erdos City, Inner Mongolia, China.


References

  • Bose S, Mahalanobis P (1935) On the distribution of the ratio of variances of two samples drawn from a given normal bivariate correlated population. Sankhyā Indian J Stat 2:65–72

  • Bickel P, Rosenblatt M (1973) On some global measures of the deviations of density function estimates. Ann Stat 1:1071–1095

  • Brown L, Levine M (2007) Variance estimation in nonparametric regression via the difference sequence method. Ann Stat 35:2219–2232

  • Claeskens G, Van Keilegom I (2003) Bootstrap confidence bands for regression curves and their derivatives. Ann Stat 31:1852–1884

  • Cai L, Yang L (2015) A smooth simultaneous confidence band for conditional variance function. TEST 24:632–655

  • Cai L, Liu R, Wang S, Yang L (2019) Simultaneous confidence bands for mean and variance functions based on deterministic design. Stat Sin 29:505–525

  • Cai L, Li L, Huang S, Ma L, Yang L (2020) Oracally efficient estimation for dense functional data with holiday effects. TEST 29:282–306

  • Cao G, Wang L, Li Y, Yang L (2016) Oracle-efficient confidence envelopes for covariance functions in dense functional data. Stat Sin 26:359–383

  • Cao G, Yang L, Todem D (2012) Simultaneous inference for the mean function based on dense functional data. J Nonparametr Stat 24:359–377

  • Degras D (2011) Simultaneous confidence bands for nonparametric regression with functional data. Stat Sin 21:1735–1765

  • de Boor C (2001) A practical guide to splines. Springer, New York

  • Eubank R, Speckman P (1993) Confidence bands in nonparametric regression. J Am Stat Assoc 88:1287–1301

  • Finney D (1938) The distribution of the ratio of estimates of the two variances in a sample from a normal bi-variate population. Biometrika 30:190–192

  • Fisher R (1924) On a distribution yielding the error function of several well known statistics. Proc Int Congr Math 2:805–813

  • Fan J, Gijbels I (1996) Local polynomial modelling and its applications. Chapman and Hall, London

  • Gayen A (1950) The distribution of the variance ratio in random samples of any size drawn from non-normal universes. Biometrika 37:236–255

  • Gu L, Wang L, Härdle W, Yang L (2014) A simultaneous confidence corridor for varying coefficient regression with sparse functional data. TEST 23:806–843

  • Gu L, Yang L (2015) Oracally efficient estimation for single-index link function with simultaneous confidence band. Electron J Stat 9:1540–1561

  • Gu L, Wang S, Yang L (2019) Simultaneous confidence bands for the distribution function of a finite population in stratified sampling. Ann Inst Stat Math 71:983–1005

  • Härdle W (1989) Asymptotic maximal deviation of M-smoothers. J Multivar Anal 29:163–179

  • Härdle W, Marron J (1991) Bootstrap simultaneous error bars for nonparametric regression. Ann Stat 19:778–796

  • Hall P, Titterington D (1988) On confidence bands in nonparametric density estimation and regression. J Multivar Anal 27:228–254

  • James G (1951) The comparison of several groups of observations when the ratios of the population variances are unknown. Biometrika 38:324–329

  • Jiang J, Cai L, Yang L (2020) Simultaneous confidence band for the difference of regression functions of two samples. Commun Stat Theory Methods. https://doi.org/10.1080/03610926.2020.1800039

  • Johnston G (1982) Probabilities of maximal deviations for nonparametric regression function estimates. J Multivar Anal 12:402–414

  • Ju J, Xu J (2013) Structural characteristics of key strata and strata behaviour of a fully mechanized longwall face with 7.0 m height chocks. Int J Rock Mech Min Sci 58:46–54

  • Leadbetter MR, Lindgren G, Rootzén H (1983) Extremes and related properties of random sequences and processes. Springer, New York

  • Levine M (2006) Bandwidth selection for a class of difference-based variance estimators in the nonparametric regression: a possible approach. Comput Stat Data Anal 50:3405–3431

  • Ma S, Yang L, Carroll R (2012) A simultaneous confidence band for sparse longitudinal regression. Stat Sin 22:95–122

  • Qian M, Shi P, Xu J (2010) Mining pressure and strata control. China University of Mining and Technology Press, Beijing

  • Scheffé H (1942) On the ratio of the variances of two normal populations. Ann Math Stat 13:371–388

  • Song Q, Yang L (2009) Spline confidence bands for variance functions. J Nonparametr Stat 21:589–609

  • Song Q, Liu R, Shao Q, Yang L (2014) A simultaneous confidence band for dense longitudinal regression. Commun Stat Theory Methods 43:5195–5210

  • Wang J, Yang L (2009) Polynomial spline confidence bands for regression curves. Stat Sin 19:325–342

  • Wang J (2012) Modelling time trend via spline confidence band. Ann Inst Stat Math 64:275–301

  • Wang J, Liu R, Cheng F, Yang L (2014) Oracally efficient estimation of autoregressive error distribution with simultaneous confidence band. Ann Stat 42:654–668

  • Wang J, Wang S, Yang L (2016) Simultaneous confidence bands for the distribution function of a finite population and its superpopulation. TEST 25:692–709

  • Welch B (1938) The significance of the difference between two means when the population variances are unequal. Biometrika 29:350–362

  • Xia Y (1998) Bias-corrected confidence bands in nonparametric regression. J R Stat Soc Ser B 60:797–811

  • Xue L, Yang L (2006) Additive coefficient modeling via polynomial spline. Stat Sin 16:1423–1446

  • Zhao Z, Wu W (2008) Confidence bands in nonparametric time series regression. Ann Stat 36:1854–1878

  • Zhang Y, Yang L (2018) A smooth simultaneous confidence band for correlation curve. TEST 27:247–269

  • Zheng S, Yang L, Härdle W (2014) A smooth simultaneous confidence corridor for the mean of sparse functional data. J Am Stat Assoc 109:661–673

  • Zheng S, Liu R, Yang L, Härdle W (2016) Statistical inference for generalized additive models: simultaneous confidence corridors and variable selection. TEST 25:607–626


Acknowledgements

This research is part of the first author’s dissertation under the supervision of the second author and was supported exclusively by the National Natural Science Foundation of China award 11771240. The authors are grateful to Professor Yaodong Jiang for providing the strata pressure data, and to two reviewers for their thoughtful comments.

Author information


Corresponding author

Correspondence to Lijian Yang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


The following is a reformulation of Theorems 11.1.5 and 12.3.5 in Leadbetter et al. (1983).

Lemma 1

Suppose the Gaussian process \(\varsigma \left( s\right) ,0\le s\le T,\) is stationary with mean zero and variance one, and its covariance function satisfies

$$\begin{aligned} r\left( t\right) =\text{ E }\varsigma \left( s\right) \varsigma \left( t+s\right) =1-C\left| t\right| ^{\alpha }+o\left( \left| t\right| ^{\alpha }\right) ,\text { as }t\rightarrow 0 \end{aligned}$$

for some constants \(C>0\) and \(0<\alpha \le 2\). Then, as \(T\rightarrow \infty \),

$$\begin{aligned} \text{ P }\left[ a_{T}\left\{ \sup _{t\in \left[ 0,T\right] }\left| \varsigma \left( t\right) \right| -b_{T}\right\} \le z\right] \rightarrow e^{-2e^{-z}},\forall z\in {\mathbb {R}}, \end{aligned}$$

where \(a_{T}=\left( 2\log T\right) ^{1/2}\) and

$$\begin{aligned} b_{T}=a_{T}+a_{T}^{-1}\times \left\{ \left( \frac{1}{\alpha }-\frac{1}{2} \right) \log \left( a_{T}^{2}/2\right) +\log \left[ \left( 2\pi \right) ^{-1/2}C^{\frac{1}{\alpha }}H_{\alpha }2^{\frac{2-\alpha }{2\alpha }}\right] \right\} \end{aligned}$$

with \(H_{1}=1,H_{2}=\pi ^{-1/2}\).
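
As a numerical illustration (not from the paper), the norming constants and the quantile of the limiting law \(e^{-2e^{-z}}\) are straightforward to evaluate; the horizon \(T\), the constant \(C\), the exponent \(\alpha \) and the function names in the Python sketch below are placeholder choices.

    import numpy as np

    def gumbel_quantile(level):
        # z such that exp(-2 exp(-z)) = level, the limiting law of Lemma 1
        return -np.log(-0.5 * np.log(level))

    def norming_constants(T, C, alpha):
        # a_T and b_T of Lemma 1, with H_1 = 1 and H_2 = pi^{-1/2}
        H = {1: 1.0, 2: np.pi ** (-0.5)}[alpha]
        a_T = np.sqrt(2.0 * np.log(T))
        b_T = a_T + (1.0 / a_T) * (
            (1.0 / alpha - 0.5) * np.log(a_T ** 2 / 2.0)
            + np.log((2.0 * np.pi) ** (-0.5) * C ** (1.0 / alpha) * H
                     * 2.0 ** ((2.0 - alpha) / (2.0 * alpha)))
        )
        return a_T, b_T

    # placeholder values: T = 1/h - 2 with h = 0.05, smooth kernel case (alpha = 2), C = 1
    a_T, b_T = norming_constants(T=18.0, C=1.0, alpha=2)
    print(a_T, b_T, b_T + gumbel_quantile(0.95) / a_T)   # threshold exceeded with prob. ~ 0.05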

Lemmas 2–4 are from Cai et al. (2019).

Lemma 2

Under Assumption (A6), for \(s=1,2\), as \(n\rightarrow \infty \),

$$\begin{aligned} \sup _{x\in {\mathcal {I}}_{n}}\left| {\hat{f}}_{s}\left( x\right) -1\right| ={\mathcal {O}}\left( n_{s}^{-1}h^{-2}\right) . \end{aligned}$$

Lemma 3

Under Assumptions (A2), (A6) and (A7), for \(s=1,2\), as \( n\rightarrow \infty \),

$$\begin{aligned} \sup _{x\in {\mathcal {I}}_{n}}\left| A_{s,n_{s}}\left( x\right) \right| ={\mathcal {O}}\left( h^{\theta _{0}+p_{0}-1}+n_{s}^{-1}h^{-1}\right) . \end{aligned}$$

Lemma 4

Under Assumptions (A2)–(A4), (A6), (A7), for \(s=1,2\), as \(n\rightarrow \infty ,\)

$$\begin{aligned}&(a) \sup _{x\in \left[ 0,1\right] }\left| B_{s,n_{s}}\left( x\right) -B_{s,n_{s},1}\left( x\right) \right| ={\mathcal {O}}_{p}\left( n_{s}^{\beta _{s}-1}h^{-1}\right) , \\&\quad (b) \sup _{x\in \left[ 0,1\right] }\left| B_{s,n_{s},1}\left( x\right) -B_{s,n_{s},2}\left( x\right) \right| ={\mathcal {O}}_{p}\left( n_{s}^{-1/2}h^{1/2}\log ^{1/2}n_{s}\right) , \\&\quad (c) \sup _{x\in {\mathcal {I}}_{n}}\left| B_{s,n_{s},2}\left( x\right) -B_{s,n_{s},3}\left( x\right) \right| ={\mathcal {O}}_{p}\left( n_{s}^{-3/2}h^{-2}\log ^{1/2}n_{s}\right) , \\&\quad (d) \sup _{x\in \left[ 0,1\right] }\left| B_{s,n_{s},3}\left( x\right) \right| ={\mathcal {O}}_{p}\left( n_{s}^{-1/2}h^{-1/2}\log ^{1/2}n_{s}\right) . \end{aligned}$$

Denote

$$\begin{aligned} B_{n_{1},n_{2}}\left( x\right) =\sigma _{1}^{-2}\left( x\right) B_{1,n_{1}}\left( x\right) -\sigma _{2}^{-2}\left( x\right) B_{2,n_{2}}\left( x\right) \\ B_{n_{1},n_{2,}3}\left( x\right) =\sigma _{1}^{-2}\left( x\right) B_{1,n_{1},3}\left( x\right) -\sigma _{2}^{-2}\left( x\right) B_{2,n_{2},3}\left( x\right) . \end{aligned}$$

Lemma 5

Under Assumptions (A2)–(A4), (A6), (A7), as \( n\rightarrow \infty ,\)

$$\begin{aligned}&\sup _{x\in {\mathcal {I}}_{n}}\left| \ln \frac{{\tilde{\sigma }} _{1}^{2}\left( x\right) }{{\tilde{\sigma }}_{2}^{2}\left( x\right) }-\ln \frac{ \sigma _{1}^{2}\left( x\right) }{\sigma _{2}^{2}\left( x\right) } -B_{n_{1},n_{2,}3}\left( x\right) \right| \\&\quad ={\mathcal {O}}_{p}\left( n_{1}^{-1/2}h^{-1/2}\log ^{1/2}n_{1}+n_{2}^{-1/2}h^{-1/2}\log ^{1/2}n_{2}\right) \\&\qquad +{\mathcal {O}}_{p}\left( h^{\theta _{0}+p_{0}-1}+n_{1}^{\beta _{1}-1}h^{-1}+n_{2}^{\beta _{2}-1}h^{-1}\right) +o_{p}\left( 1\right) . \end{aligned}$$

Consequently,

$$\begin{aligned} a_{h}\left\{ v_{n}^{-1}\sup _{x\in {\mathcal {I}}_{n}}\left| \ln \frac{ {\tilde{\sigma }}_{1}^{2}\left( x\right) }{{\tilde{\sigma }}_{2}^{2}\left( x\right) }-\ln \frac{\sigma _{1}^{2}\left( x\right) }{\sigma _{2}^{2}\left( x\right) }\right| \right\} =a_{h}\left\{ v_{n}^{-1}\sup _{x\in {\mathcal {I}} _{n}}\left| B_{n_{1},n_{2,}3}\left( x\right) \right| \right\} +o_{p}\left( 1\right) , \end{aligned}$$

where \(a_{h}\) and \(v_{n}\) are given in (5).

Proof According to Lemmas 2–4, one has

$$\begin{aligned}&\sup _{x\in {\mathcal {I}}_{n}}\left| {\hat{f}}_{s}^{-1}\left( x\right) \left\{ A_{s,n_{s}}\left( x\right) +B_{s,n_{s}}\left( x\right) \right\} \right| \\&\quad \le \sup _{x\in {\mathcal {I}}_{n}}\left| {\hat{f}}_{s}^{-1}\left( x\right) \right| \sup _{x\in {\mathcal {I}}_{n}}\left| A_{s,n_{s}}\left( x\right) +B_{s,n_{s}}\left( x\right) \right| \\&\quad \le \left\{ 1+{\mathcal {O}}\left( n_{s}^{-1}h^{-2}\right) \right\} \left\{ \sup _{x\in {\mathcal {I}}_{n}}\left| A_{s,n_{s}}\left( x\right) \right| +\sup _{x\in {\mathcal {I}}_{n}}\left| B_{s,n_{s}}\left( x\right) \right| \right\} \\&\quad \le {\mathcal {O}}\left( h^{\theta _{0}+p_{0}-1}+n_{s}^{-1}h^{-1}\right) +\sup _{x\in {\mathcal {I}}_{n}}\left| B_{s,n_{s}}\left( x\right) \right| \\&\le {\mathcal {O}}\left( h^{\theta _{0}+p_{0}-1}+n_{s}^{-1}h^{-1}\right) +\sup _{x\in \left[ 0,1\right] }\left| B_{s,n_{s}}\left( x\right) -B_{s,n_{s},1}\left( x\right) \right| \\&\qquad +\sup _{x\in \left[ 0,1\right] }\left| B_{s,n_{s},1}\left( x\right) -B_{s,n_{s},2}\left( x\right) \right| \\&\qquad +\sup _{x\in {\mathcal {I}}_{n}}\left| B_{s,n_{s},2}\left( x\right) -B_{s,n_{s},3}\left( x\right) \right| +\sup _{x\in \left[ 0,1\right] }\left| B_{s,n_{s},3}\left( x\right) \right| \\&\quad \le {\mathcal {O}}_{p}\left( h^{\theta _{0}+p_{0}-1}+n_{s}^{\beta _{s}-1}h^{-1}+n_{s}^{-1/2}h^{-1/2}\log ^{1/2}n_{s}\right) \end{aligned}$$

Now applying Taylor series expansions to \(\ln {\tilde{\sigma }}_{s}^{2}\left( x\right) -\ln \sigma _{s}^{2}\left( x\right) \), one has, for \(s=1,2\),

$$\begin{aligned}&\sup _{x\in {\mathcal {I}}_{n}}\left| \ln {\tilde{\sigma }}_{s}^{2}\left( x\right) -\ln \sigma _{s}^{2}\left( x\right) \right| \\= & {} \sup _{x\in {\mathcal {I}}_{n}}\left| \ln \left[ \sigma _{s}^{2}\left( x\right) +{\hat{f}}_{s}^{-1}\left( x\right) \left\{ A_{s,n_{s}}\left( x\right) +B_{s,n_{s}}\left( x\right) \right\} \right] -\ln \sigma _{s}^{2}\left( x\right) \right| \\\le & {} \sup _{x\in {\mathcal {I}}_{n}}\left| \sigma _{s}^{-2}\left( x\right) {\hat{f}}_{s}^{-1}\left( x\right) \left\{ A_{s,n_{s}}\left( x\right) +B_{s,n_{s}}\left( x\right) \right\} \right| +o_{p}\left( 1\right) . \end{aligned}$$

Then one obtains

$$\begin{aligned}&\ln \frac{{\tilde{\sigma }}_{1}^{2}\left( x\right) }{{\tilde{\sigma }} _{2}^{2}\left( x\right) }-\ln \frac{\sigma _{1}^{2}\left( x\right) }{\sigma _{2}^{2}\left( x\right) }-B_{n_{1},n_{2},3}\left( x\right) \\&\quad =\ln {\tilde{\sigma }}_{1}^{2}\left( x\right) -\ln \sigma _{1}^{2}\left( x\right) -\left\{ \ln {\tilde{\sigma }}_{2}^{2}\left( x\right) -\ln \sigma _{2}^{2}\left( x\right) \right\} -B_{n_{1},n_{2},3}\left( x\right) \\&\quad =\sigma _{1}^{-2}\left( x\right) {\hat{f}}_{1}^{-1}\left( x\right) \left\{ A_{1,n_{1}}\left( x\right) +B_{1,n_{1}}\left( x\right) \right\} \\&\qquad -\sigma _{2}^{-2}\left( x\right) {\hat{f}}_{2}^{-1}\left( x\right) \left\{ A_{2,n_{2}}\left( x\right) +B_{2,n_{2}}\left( x\right) \right\} -B_{n_{1},n_{2},3}\left( x\right) +o_{p}\left( 1\right) \\&\quad =\sigma _{1}^{-2}\left( x\right) {\hat{f}}_{1}^{-1}\left( x\right) A_{1,n_{1}}\left( x\right) -\sigma _{2}^{-2}\left( x\right) {\hat{f}} _{2}^{-1}\left( x\right) A_{2,n_{2}}\left( x\right) \\&\qquad +\sigma _{1}^{-2}\left( x\right) \left\{ {\hat{f}}_{1}^{-1}\left( x\right) -1\right\} B_{1,n_{1}}\left( x\right) -\sigma _{2}^{-2}\left( x\right) \left\{ {\hat{f}}_{2}^{-1}\left( x\right) -1\right\} B_{2,n_{2}}\left( x\right) \\&\qquad +B_{n_{1},n_{2}}\left( x\right) -B_{n_{1},n_{2},3}\left( x\right) +o_{p}\left( 1\right) . \end{aligned}$$

Since one has

$$\begin{aligned}&B_{n_{1},n_{2}}\left( x\right) -B_{n_{1},n_{2},3}\left( x\right) =\sigma _{1}^{-2}\left( x\right) \left\{ B_{1,n_{1}}\left( x\right) -B_{1,n_{1},3}\left( x\right) \right\} -\sigma _{2}^{-2}\left( x\right) \left\{ B_{2,n_{2}}\left( x\right) -B_{2,n_{2},3}\left( x\right) \right\} \nonumber \\&\quad =\sigma _{1}^{-2}\left( x\right) \left\{ B_{1,n_{1}}\left( x\right) -B_{1,n_{1},1}\left( x\right) \right\} +\sigma _{1}^{-2}\left( x\right) \left\{ B_{1,n_{1},1}\left( x\right) -B_{1,n_{1},2}\left( x\right) \right\} \nonumber \\&\qquad +\sigma _{1}^{-2}\left( x\right) \left\{ B_{1,n_{1},2}\left( x\right) -B_{1,n_{1},3}\left( x\right) \right\} -\sigma _{2}^{-2}\left( x\right) \left\{ B_{2,n_{2}}\left( x\right) -B_{2,n_{2},1}\left( x\right) \right\} \nonumber \\&\qquad -\sigma _{2}^{-2}\left( x\right) \left\{ B_{2,n_{2},1}\left( x\right) -B_{2,n_{2},2}\left( x\right) \right\} -\sigma _{2}^{-2}\left( x\right) \left\{ B_{2,n_{2},2}\left( x\right) -B_{2,n_{2},3}\left( x\right) \right\} , \nonumber \\ \end{aligned}$$
(14)

and according to Lemmas 2–4, one has

$$\begin{aligned}&\sup _{x\in {\mathcal {I}}_{n}}\left| \sigma _{s}^{-2}\left( x\right) {\hat{f}} _{s}^{-1}\left( x\right) A_{s,n_{s}}\left( x\right) \right| ={\mathcal {O}} \left( h^{\theta _{0}+p_{0}-1}+n_{s}^{-1}h^{-1}\right) , \end{aligned}$$
(15)
$$\begin{aligned}&\sup _{x\in {\mathcal {I}}_{n}}\left| \sigma _{s}^{-2}\left( x\right) \left\{ {\hat{f}}_{s}^{-1}\left( x\right) -1\right\} B_{s,n_{s}}\left( x\right) \right| ={\mathcal {O}}_{p}\left( n_{s}^{\beta _{s}-1}h^{-1}+n_{s}^{-1/2}h^{-1/2}\log ^{1/2}n_{s}\right) . \nonumber \\ \end{aligned}$$
(16)

Hence combining (14), (15) and (16), the proof is completed.

Denote the following processes

$$\begin{aligned}&Y_{1,n_{1},1}\left( x\right) =h^{-1}n_{1}^{-1/2}\left( \mu _{1,4}-1\right) ^{1/2}\int K\left( x-u/h\right) dW_{1,n_{1}}(u),x\in \left[ 1,h^{-1}-1\right] , \\&\quad Y_{2,n_{2,}1}\left( x\right) =h^{-1}n_{2}^{-1/2}\left( \mu _{2,4}-1\right) ^{1/2}\int K\left( x-u/h\right) dW_{2,n_{2}}(u),x\in \left[ 1,h^{-1}-1\right] , \\&\quad Y_{1,n_{1},2}\left( x\right) =h^{-1/2}n_{1}^{-1/2}\left( \mu _{1,4}-1\right) ^{1/2}\int K\left( x-r\right) dW_{1,n_{1}}(r),x\in \left[ 1,h^{-1}-1\right] , \\&\quad Y_{2,n_{2},2}\left( x\right) =h^{-1/2}n_{2}^{-1/2}\left( \mu _{2,4}-1\right) ^{1/2}\int K\left( x-r\right) dW_{2,n_{2}}(r),x\in \left[ 1,h^{-1}-1\right] . \end{aligned}$$

As \(\text{ E }\left\{ B_{n_{1},n_{2,}3}^{2}\left( x\right) \right\} =h^{-1}\left\{ n_{1}^{-1}\left( \mu _{1,4}-1\right) +n_{2}^{-1}\left( \mu _{2,4}-1\right) \right\} \int _{-1}^{1}K^{2}\left( u\right) du\), one obtains the following standard Gaussian processes,

$$\begin{aligned}&\bigtriangleup _{1}\left( x\right) =\frac{B_{n_{1},n_{2,}3}\left( x\right) }{ h^{-1/2}\left[ \left\{ n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right\} \int _{-1}^{1}K^{2}\left( u\right) du\right] ^{1/2}},x\in \left[ h,1-h\right] , \end{aligned}$$
(17)
$$\begin{aligned}&\bigtriangleup _{2}\left( x\right) =\frac{Y_{1,n_{1},1}\left( x\right) -Y_{2,n_{2,}1}\left( x\right) }{h^{-1/2}\left[ \left\{ n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right\} \int _{-1}^{1}K^{2}\left( u\right) du \right] ^{1/2}},x\in \left[ 1,h^{-1}-1\right] , \nonumber \\ \end{aligned}$$
(18)

where \(\nu _{1,4}=\mu _{1,4}-1\) and \(\nu _{2,4}=\mu _{2,4}-1\).
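
A one-line check using the second moment displayed above confirms that \(\bigtriangleup _{1}\) is indeed standardized pointwise:

$$\begin{aligned} \text{ E }\left\{ \bigtriangleup _{1}^{2}\left( x\right) \right\} =\frac{h^{-1}\left\{ n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right\} \int _{-1}^{1}K^{2}\left( u\right) du}{h^{-1}\left\{ n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right\} \int _{-1}^{1}K^{2}\left( u\right) du}=1,\quad x\in \left[ h,1-h\right] . \end{aligned}$$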

Another standard Gaussian process is

$$\begin{aligned} \frac{Y_{1,n_{1},2}\left( x\right) -Y_{2,n_{2,}2}\left( x\right) }{h^{-1/2} \left[ \left\{ n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right\} \int _{-1}^{1}K^{2}\left( u\right) du\right] ^{1/2}},x\in \left[ 1,h^{-1}-1 \right] , \end{aligned}$$

which is \(\zeta \left( x\right) \) defined in (10).

Lemma 6

The absolute maximum of the process \(\bigtriangleup _{1}\left( x\right) \) has the same distribution as that of \(\bigtriangleup _{2}\left( x\right) \), and the absolute maximum of the process \(\bigtriangleup _{2}\left( x\right) \) has the same distribution as that of \(\zeta \left( x\right) \), that is

$$\begin{aligned} \sup _{x\in \left[ h,1-h\right] }\left| \bigtriangleup _{1}\left( x\right) \right| \overset{d}{=}\sup _{x\in \left[ 1,h^{-1}-1\right] }\left| \bigtriangleup _{2}\left( x\right) \right| \overset{d}{=} \sup _{x\in \left[ 1,h^{-1}-1\right] }\left| \zeta \left( x\right) \right| . \end{aligned}$$

Proof This lemma follows from the fact that the process \(B_{n_{1},n_{2,}3}\left( x\right) ,x\in \left[ h,1-h\right] \), has the same probability law as \(Y_{1,n_{1},1}\left( x\right) -Y_{2,n_{2,}1}\left( x\right) ,x\in \left[ 1,h^{-1}-1\right] \), and that, for \(s=1,2\), the process \(Y_{s,n_{s},1}\left( x\right) ,x\in \left[ h,1-h\right] \), has the same probability law as \(Y_{s,n_{s},2}\left( x\right) ,x\in \left[ 1,h^{-1}-1\right] \).

Proof of Proposition 1

Proposition 1 is a direct corollary of Lemma 5, Lemma 6 and Proposition 3.

Proof of Proposition 2

According to Theorem 2 in Cai et al. (2019) and applying a Taylor expansion, one has

$$\begin{aligned}&\sup _{x\in {\mathcal {I}}_{n}}\left| \ln \frac{{\hat{\sigma }}_{1}^{2}\left( x\right) }{{\hat{\sigma }}_{2}^{2}\left( x\right) }-\ln \frac{{\tilde{\sigma }} _{1}^{2}\left( x\right) }{{\tilde{\sigma }}_{2}^{2}\left( x\right) }\right| =\sup _{x\in {\mathcal {I}}_{n}}\left| \ln {\hat{\sigma }}_{1}^{2}\left( x\right) -\ln {\tilde{\sigma }}_{1}^{2}\left( x\right) -\left\{ \ln {\hat{\sigma }} _{2}^{2}\left( x\right) -\ln {\tilde{\sigma }}_{2}^{2}\left( x\right) \right\} \right| \\&\quad \le \sup _{x\in {\mathcal {I}}_{n}}\left| \ln {\hat{\sigma }}_{1}^{2}\left( x\right) -\ln {\tilde{\sigma }}_{1}^{2}\left( x\right) \right| +\sup _{x\in {\mathcal {I}}_{n}}\left| \ln {\hat{\sigma }}_{2}^{2}\left( x\right) -\ln {\tilde{\sigma }}_{2}^{2}\left( x\right) \right| \\&\quad =\sup _{x\in {\mathcal {I}}_{n}}\left| {\tilde{\sigma }}_{1}^{-2}\left( x\right) \left\{ {\hat{\sigma }}_{1}^{2}\left( x\right) -{\tilde{\sigma }} _{1}^{2}\left( x\right) \right\} \right| +\sup _{x\in {\mathcal {I}} _{n}}\left| {\tilde{\sigma }}_{2}^{-2}\left( x\right) \left\{ {\hat{\sigma }} _{2}^{2}\left( x\right) -{\tilde{\sigma }}_{2}^{2}\left( x\right) \right\} \right| +{\mathcal {O}}_{p}(n_{1}^{-1}+n_{2}^{-1}) \\&\quad \le c_{\sigma }^{-2}\sup _{x\in {\mathcal {I}}_{n}}\left| {\hat{\sigma }} _{1}^{2}\left( x\right) -{\tilde{\sigma }}_{1}^{2}\left( x\right) \right| +c_{\sigma }^{-2}\sup _{x\in {\mathcal {I}}_{n}}\left| {\hat{\sigma }} _{2}^{2}\left( x\right) -{\tilde{\sigma }}_{2}^{2}\left( x\right) \right| + {\mathcal {O}}_{p}(n_{1}^{-1}+n_{2}^{-1})=o_{p}(n^{-1/2}), \end{aligned}$$

which completes the proof.

Proof of Proposition 3

For the Gaussian process \(\zeta \left( x\right) \), the correlation function is

$$\begin{aligned}&r\left( x-y\right) =\text{ corr }\left( \zeta \left( x\right) ,\zeta \left( y\right) \right) =\frac{\text{ E }\left\{ \zeta \left( x\right) \zeta \left( y\right) \right\} }{\text{ var}^{1/2}\left\{ \zeta \left( x\right) \right\} \text{ var}^{1/2}\left\{ \zeta \left( y\right) \right\} } \\&\quad =\frac{\left( n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right) \left( K*K\right) \left( x-y\right) }{\left( n_{1}^{-1}\nu _{1,4}+n_{2}^{-1}\nu _{2,4}\right) \int _{-1}^{1}K^{2}\left( u\right) du} \\&\quad =\frac{\left( K*K\right) \left( x-y\right) }{\int _{-1}^{1}K^{2}\left( u\right) du}, \end{aligned}$$

which implies that

$$\begin{aligned} r\left( t\right) =\frac{\int K\left( u\right) K\left( u-t\right) du}{ \int _{-1}^{1}K^{2}\left( u\right) du}. \end{aligned}$$
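
To make the local behaviour of \(r\) explicit (a step only implicit here), a second-order Taylor expansion of the numerator at \(t=0\), valid for a sufficiently smooth kernel with \(K\left( \pm 1\right) =0\) (an assumption on the kernel stated here for completeness), gives

$$\begin{aligned} \int K\left( u\right) K\left( u-t\right) du=\int _{-1}^{1}K^{2}\left( u\right) du-\frac{t^{2}}{2}\int _{-1}^{1}\left\{ K^{\left( 1\right) }\left( u\right) \right\} ^{2}du+o\left( t^{2}\right) ,\text { as }t\rightarrow 0, \end{aligned}$$

since \(\int K\left( u\right) K^{\left( 1\right) }\left( u\right) du=0\) and \(\int K\left( u\right) K^{\left( 2\right) }\left( u\right) du=-\int \left\{ K^{\left( 1\right) }\left( u\right) \right\} ^{2}du\) by integration by parts. Dividing by \(\int _{-1}^{1}K^{2}\left( u\right) du\) yields \(r\left( t\right) =1-Ct^{2}+o\left( t^{2}\right) \) with the constant \(C\) identified below.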

Define next a Gaussian process \(\varsigma \left( t\right) ,0\le t\le T=T_{n}=h^{-1}-2\),

$$\begin{aligned} \varsigma \left( t\right) =\zeta \left( t+1\right) \left\{ \int _{-1}^{1}K^{2}\left( u\right) du\right\} ^{-1/2}, \end{aligned}$$

which is stationary with mean zero and variance one, and covariance function

$$\begin{aligned} r\left( t\right) =\text{ E }\varsigma \left( s\right) \varsigma \left( t+s\right) =1-Ct^{2}+o\left( t^{2}\right) ,t\rightarrow 0, \end{aligned}$$

with \(C=\int _{-1}^{1}\left\{ K^{\left( 1\right) }\left( u\right) \right\} ^{2}du/\left\{ 2\int _{-1}^{1}K^{2}\left( u\right) du\right\} \). Hence applying Lemmas 1–6, one has for \( h\rightarrow 0\) or \(T\rightarrow \infty ,\)

$$\begin{aligned} \text{ P }\left[ a_{T}\left\{ \sup _{t\in \left[ 0,T\right] }\left| \varsigma \left( t\right) \right| -b_{T}\right\} \le z\right] \rightarrow e^{-2e^{-z}},\forall z\in {\mathbb {R}}, \end{aligned}$$

where \(a_{T}=\left( 2\log T\right) ^{1/2}\) and \(b_{T}=a_{T}+a_{T}^{-1}\left\{ 2^{-1}\log \left( C_{K}/\left( 4\pi ^{2}\right) \right) \right\} \). Note that

$$\begin{aligned} a_{h}a_{T}^{-1}\rightarrow 1,a_{T}\left( b_{T}-b_{h}\right) \rightarrow 0. \end{aligned}$$

Hence, applying Slutsky’s Theorem twice, one obtains that

$$\begin{aligned} a_{h}\left\{ \sup _{t\in \left[ 0,T\right] }\left| \varsigma \left( t\right) \right| -b_{h}\right\}= & {} a_{h}a_{T}^{-1}\left[ a_{T}\left\{ \sup _{t\in \left[ 0,T\right] }\left| \varsigma \left( t\right) \right| -b_{T}\right\} \right] \\&+a_{h}\left( b_{T}-b_{h}\right) , \end{aligned}$$

which converges in distribution to the same limit as \(a_{T}\left\{ \sup _{t\in \left[ 0,T\right] }\left| \varsigma \left( t\right) \right| -b_{T}\right\} \). Thus

$$\begin{aligned} \text{ P }\left[ a_{h}\left\{ \sup _{s\in \left[ 1,h^{-1}-1\right] }\left| \zeta \left( s\right) \right| -b_{h}\right\} <z\right] \rightarrow \exp \left\{ -2\exp \left( -z\right) \right\} ,z\in {\mathbb {R}}. \end{aligned}$$

Hence the proof is completed.
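
As an illustrative check (not part of the paper), the process \(\zeta \) of (10) can be simulated by kernel-smoothing Wiener increments and the empirical law of \(\sup \left| \zeta \right| \) compared with the Gumbel limit just established. The Python sketch below is a minimal version of such a check; the quartic kernel, the bandwidth \(h=0.1\), the grid spacing and the number of replications are all arbitrary choices, and \(b_{T}\) is evaluated from Lemma 1 with \(\alpha =2\) and the constant \(C\) of this proof.

    import numpy as np

    rng = np.random.default_rng(1)

    def quartic(u):
        # quartic (biweight) kernel supported on [-1, 1]
        return np.where(np.abs(u) <= 1.0, 15.0 / 16.0 * (1.0 - u ** 2) ** 2, 0.0)

    h, dr, reps = 0.1, 0.01, 200
    r = np.arange(0.0, 1.0 / h + dr, dr)            # rescaled design interval [0, 1/h]
    u = np.arange(-1.0, 1.0 + dr, dr)               # kernel support grid
    K = quartic(u)
    K2 = np.trapz(K ** 2, u)                        # int K^2(u) du
    C = np.trapz(np.gradient(K, dr) ** 2, u) / (2.0 * K2)

    interior = (r >= 1.0) & (r <= 1.0 / h - 1.0)    # x in [1, 1/h - 1]
    sups = np.empty(reps)
    for b in range(reps):
        dW = rng.standard_normal(r.size) * np.sqrt(dr)          # Wiener increments dW(r)
        zeta = np.convolve(dW, K, mode="same") / np.sqrt(K2)    # ~ int K(x - r) dW(r) / (int K^2)^{1/2}
        sups[b] = np.max(np.abs(zeta[interior]))

    T = 1.0 / h - 2.0
    a_T = np.sqrt(2.0 * np.log(T))
    b_T = a_T + np.log(np.sqrt(C) / (np.sqrt(2.0) * np.pi)) / a_T   # Lemma 1 with alpha = 2
    z95 = -np.log(-0.5 * np.log(0.95))
    print(np.mean(a_T * (sups - b_T) <= z95))       # roughly 0.95; the approximation improves as h decreases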

Proof of Theorem 1

According to Proposition 1, as \(n\rightarrow \infty ,\)

$$\begin{aligned} {\mathbb {P}}\left[ a_{h}\left\{ v_{n}^{-1}\sup _{x\in {\mathcal {I}} _{n}}\left| \ln \frac{{\tilde{\sigma }}_{1}^{2}\left( x\right) }{{\tilde{\sigma }}_{2}^{2}\left( x\right) }-\ln \frac{\sigma _{1}^{2}\left( x\right) }{ \sigma _{2}^{2}\left( x\right) }\right| -b_{h}\right\} \le z\right] \rightarrow \exp \left\{ -2\exp \left( -z\right) \right\} ,z\in {\mathbb {R}} , \nonumber \\ \end{aligned}$$
(19)

where \(a_{h},b_{h}\) and \(v_{n}\) are given in (5). Finally applying Proposition 2, one obtains

$$\begin{aligned} a_{h}\left\{ v_{n}^{-1}\sup _{x\in {\mathcal {I}}_{n}}\left| \ln \frac{{\hat{\sigma }}_{1}^{2}\left( x\right) }{{\hat{\sigma }}_{2}^{2}\left( x\right) }-\ln \frac{{\tilde{\sigma }}_{1}^{2}\left( x\right) }{{\tilde{\sigma }}_{2}^{2}\left( x\right) }\right| \right\} =o_{p}\left( \left\{ \log \left( h^{-1}\right) \right\} ^{1/2}h^{1/2}\right) =o_{p}\left( 1\right) . \end{aligned}$$

Using Slutsky’s Theorem one can substitute \(\ln \frac{{\hat{\sigma }} _{1}^{2}\left( x\right) }{{\hat{\sigma }}_{2}^{2}\left( x\right) }\) for \(\ln \frac{{\tilde{\sigma }}_{1}^{2}\left( x\right) }{{\tilde{\sigma }}_{2}^{2}\left( x\right) }\) in (19). Hence the proof of Theorem 1 is completed.
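
For practical use, Theorem 1 is inverted in the usual way: since \(\exp \left\{ -2\exp \left( -z\right) \right\} =1-\alpha \) at \(z=q_{1-\alpha }=-\ln \left\{ -2^{-1}\ln \left( 1-\alpha \right) \right\} \), an asymptotic \(100\left( 1-\alpha \right) \%\) additive SCB for \(\ln \left\{ \sigma _{1}^{2}\left( x\right) /\sigma _{2}^{2}\left( x\right) \right\} \) is \(\ln \left\{ {\hat{\sigma }}_{1}^{2}\left( x\right) /{\hat{\sigma }}_{2}^{2}\left( x\right) \right\} \pm v_{n}\left( b_{h}+a_{h}^{-1}q_{1-\alpha }\right) \). The Python sketch below spells this out; because equation (5) is not reproduced in this appendix, the specific forms of \(a_{h}\), \(b_{h}\) and \(v_{n}\) used here are assumptions patterned on (17), the proof of Proposition 3 and Lemma 1, and every input (the estimated log ratio, \(\hat{\nu }_{s,4}\), the kernel constants) is a placeholder to be supplied by the user.

    import numpy as np

    def additive_scb(log_ratio_hat, h, n1, n2, nu14_hat, nu24_hat, K2_int, C_K, alpha=0.05):
        # Hedged sketch of the additive SCB suggested by Theorem 1. The forms of
        # a_h, b_h, v_n below are assumptions patterned on (17) and the proof of
        # Proposition 3 (alpha = 2, T = 1/h - 2); equation (5) of the paper is
        # authoritative. C_K = int K'(u)^2 du / int K(u)^2 du, K2_int = int K(u)^2 du.
        T = 1.0 / h - 2.0
        a_h = np.sqrt(2.0 * np.log(T))
        b_h = a_h + 0.5 * np.log(C_K / (4.0 * np.pi ** 2)) / a_h
        v_n = np.sqrt((nu14_hat / n1 + nu24_hat / n2) * K2_int / h)
        q = -np.log(-0.5 * np.log(1.0 - alpha))          # quantile of exp(-2 exp(-z))
        half_width = v_n * (b_h + q / a_h)
        return log_ratio_hat - half_width, log_ratio_hat + half_width

    # Usage (placeholders): log_ratio_hat = np.log(sigma1_hat_sq / sigma2_hat_sq) on a
    # grid of x in [h, 1 - h]; nu14_hat, nu24_hat estimate mu_{s,4} - 1; for the quartic
    # kernel K2_int = 5/7 and C_K = 3.
    # lower, upper = additive_scb(log_ratio_hat, h, n1, n2, nu14_hat, nu24_hat, 5/7, 3.0)

Exponentiating the two endpoints gives the corresponding multiplicative SCB for \(\sigma _{1}^{2}\left( x\right) /\sigma _{2}^{2}\left( x\right) \).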

About this article


Cite this article

Zhong, C., Yang, L. Simultaneous confidence bands for comparing variance functions of two samples based on deterministic designs. Comput Stat 36, 1197–1218 (2021). https://doi.org/10.1007/s00180-020-01043-6

