Abstract
We propose a robust estimator of the stable tail dependence function in the case where random covariates are recorded. Under suitable assumptions, we derive the finite-dimensional weak convergence of the estimator properly normalized. The performance of our estimator in terms of efficiency and robustness is illustrated through a simulation study. Our methodology is applied to a real dataset of sale prices of residential properties.
References
Basu, A., Harris, I. R., Hjort, N. L., Jones, M. C. (1998). Robust and efficient estimation by minimizing a density power divergence. Biometrika, 85, 549–559.
Beirlant, J., Joossens, E., Segers, J. (2009). Second-order refined peaks-over-threshold modelling for heavy-tailed distributions. Journal of Statistical Planning and Inference, 139, 2800–2815.
Beirlant, J., Dierckx, G., Guillou, A. (2011). Bias-reduced estimators for bivariate tail modelling. Insurance: Mathematics and Economics, 49, 18–26.
Castro, D., de Carvalho, M. (2017). Spectral density regression for bivariate extremes. Stochastic Environmental Research and Risk Assessment, 31, 1603–1613.
Castro, D., de Carvalho, M., Wadsworth, J. L. (2018). Time-varying extreme value dependence with application to leading European stock markets. Annals of Applied Statistics, 12, 283–309.
Daouia, A., Gardes, L., Girard, S., Lekina, A. (2011). Kernel estimators of extreme level curves. TEST, 20, 311–333.
de Carvalho, M. (2016). Statistics of extremes: Challenges and opportunities. In F. Longin (Ed.), Extreme events in finance: A handbook of extreme value theory and its applications. Hoboken: Wiley.
de Carvalho, M., Leonelli, M., Rossi, A. (2020). Tracking change-points in multivariate extremes. arXiv:2011.05067.
De Cock, D. (2011). Ames, Iowa: Alternative to the Boston housing data as an end of semester regression project. Journal of Statistics Education. https://doi.org/10.1080/10691898.2011.11889627.
de Haan, L., Ferreira, A. (2006). Extreme value theory: An introduction. New York: Springer.
Dell’Aquila, R., Embrechts, P. (2006). Extremes and robustness: A contradiction? Financial Markets and Portfolio Management, 20, 103–118.
Drees, H. (2022). Statistical inference on a changing extreme value dependence structure. arXiv:2201.06389v2.
Dutang, C., Goegebeur, Y., Guillou, A. (2014). Robust and bias-corrected estimation of the coefficient of tail dependence. Insurance: Mathematics and Economics, 57, 46–57.
Escobar-Bach, M., Goegebeur, Y., Guillou, A., You, A. (2017). Bias-corrected and robust estimation of the bivariate stable tail dependence function. TEST, 26, 284–307.
Escobar-Bach, M., Goegebeur, Y., Guillou, A. (2018a). Local robust estimation of the Pickands dependence function. Annals of Statistics, 46, 2806–2843.
Escobar-Bach, M., Goegebeur, Y., Guillou, A. (2018b). Local estimation of the conditional stable tail dependence function. Scandinavian Journal of Statistics, 45, 590–617.
Escobar-Bach, M., Goegebeur, Y., Guillou, A. (2020). Bias correction in conditional multivariate extremes. Electronic Journal of Statistics, 14, 1773–1795.
Feuerverger, A., Hall, P. (1999). Estimating a tail exponent by modelling departure from a Pareto distribution. Annals of Statistics, 27, 760–781.
Fujisawa, H., Eguchi, S. (2008). Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis, 99, 2053–2081.
Gardes, L., Girard, S. (2015). Nonparametric estimation of the conditional tail copula. Journal of Multivariate Analysis, 137, 1–16.
Giné, E., Guillou, A. (2002). Rates of strong uniform consistency for multivariate kernel density estimators. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, 38, 907–921.
Giné, E., Koltchinskii, V., Zinn, J. (2004). Weighted uniform consistency of kernel density estimators. Annals of Probability, 32, 2570–2605.
Goegebeur, Y., Guillou, A., Qin, J. (2019). Bias-corrected estimation for conditional Pareto-type distributions with random right censoring. Extremes, 22, 459–498.
Goegebeur, Y., Guillou, A., Ho, N. K. L., Qin, J. (2020). Robust nonparametric estimation of the conditional tail dependence coefficient. Journal of Multivariate Analysis. https://doi.org/10.1016/j.jmva.2020.104607.
Goegebeur, Y., Guillou, A., Ho, N. K. L., Qin, J. (2021). A Weissman-type estimator of the conditional marginal expected shortfall. Econometrics and Statistics. https://doi.org/10.1016/j.ecosta.2021.09.006.
Gomes, M. I., Martins, M. J. (2004). Bias-reduction and explicit semi-parametric estimation of the tail index. Journal of Statistical Planning and Inference, 124, 361–378.
Hampel, F., Ronchetti, E., Rousseeuw, P., Stahel, W. (1986). Robust statistics: The approach based on influence functions. New York: Wiley.
Huang, X. (1992). Statistics of bivariate extremes. PhD Thesis, Erasmus University Rotterdam, Tinbergen Institute Research series No. 22.
Huber, P. (1981). Robust statistics. New York: Wiley.
Hubert, M., Dierckx, G., Vanpaemel, D. (2013). Detecting influential data points for the Hill estimator in Pareto-type distributions. Computational Statistics and Data Analysis, 65, 13–28.
Kullback, S., Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22, 79–86.
Ledford, A. W., Tawn, J. A. (1997). Modelling dependence within joint tail regions. Journal of the Royal Statistical Society: Series B, 59, 475–499.
Mhalla, L., de Carvalho, M., Chavez-Demoulin, V. (2019). Regression type models for extremal dependence. Scandinavian Journal of Statistics, 46, 1141–1167.
Minami, M., Eguchi, S. (2002). Robust blind source separation by beta divergence. Neural Computation, 14, 1859–1886.
Nolan, D., Pollard, D. (1987). U-processes: Rates of convergence. Annals of Statistics, 15, 780–799.
Resnick, S. I. (2007). Heavy-tail phenomena. Probabilistic and statistical modeling. New York: Springer.
Song, J. (2021). Sequential change point test in the presence of outliers: The density power divergence based approach. Electronic Journal of Statistics, 15, 3504–3550.
Acknowledgements
The authors sincerely thank the editor, associate editor and the referees for their helpful comments and suggestions that led to substantial improvement of the paper. The research of Armelle Guillou was supported by the French National Research Agency under the grant ANR-19-CE40-0013-01/ExtremReg project and an International Emerging Action (IEA-00179). Computation/simulation for the work described in this paper was supported by the DeIC National HPC Centre, SDU.
Ethics declarations
Conflict of interest
The authors declare no conflicts of interest.
Appendix
The minimization of the empirical density power divergence \({{\widehat{\Delta }}}_{\alpha , 1-t}(\delta _{1-t}|x_0)\) is based on its derivative. Direct computations show that all the terms appearing in this derivative have the following form
for \(s <0\).
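For intuition on the minimization itself, the following is a minimal numerical sketch of the density power divergence criterion of Basu et al. (1998) for a simple unconditional exponential model; the exponential parametrization, the tuning value \(\alpha =0.5\), and the grid search are illustrative assumptions and not the paper's conditional tail specification, in which the estimating equation (the derivative set to zero) is solved instead.

```python
import math
import random

def mdpd_objective(lam, data, alpha):
    # Density power divergence criterion of Basu et al. (1998) for the
    # exponential density f_lam(y) = lam * exp(-lam * y):
    #   d_alpha(lam) = int f_lam^(1+alpha) - (1 + 1/alpha) * mean of f_lam(Y_i)^alpha,
    # where int f_lam^(1+alpha) = lam^alpha / (1 + alpha) in closed form.
    integral_term = lam ** alpha / (1.0 + alpha)
    mean_term = sum((lam * math.exp(-lam * y)) ** alpha for y in data) / len(data)
    return integral_term - (1.0 + 1.0 / alpha) * mean_term

def mdpd_fit(data, alpha, grid=None):
    # Crude grid minimization for illustration; in the paper the MDPD
    # estimator solves the derivative-based estimating equation.
    grid = grid or [0.05 + 0.01 * j for j in range(300)]
    return min(grid, key=lambda lam: mdpd_objective(lam, data, alpha))

# Contaminated sample: exponential(1) data plus gross outliers.
random.seed(0)
data = [random.expovariate(1.0) for _ in range(200)] + [50.0] * 20
mle_rate = len(data) / sum(data)       # maximum likelihood: dragged toward 0
mdpd_rate = mdpd_fit(data, alpha=0.5)  # MDPD: outliers get density weight ~0
```

The outliers contribute \(f_{\lambda }(y)^{\alpha } \approx 0\) near the true parameter, which is the downweighting mechanism that gives the MDPD estimator its robustness; as \(\alpha \rightarrow 0\) the criterion reduces to the (nonrobust) likelihood.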
Assuming \(F_{Z_{1-t}}(y|x_0)\) is strictly increasing in y, we can rewrite this main statistic as follows:
where
Thus, we start this appendix with some auxiliary results that allow us to study the statistic \(T_{n,1-t}(y|x_0)\); subsequently, in Sect. 7.2, we establish the weak convergence of \(S_{n,1-t}(s|x_0)\). Finally, in Sect. 7.3, Theorem 1 will be established. The proof of Theorem 2 from Sect. 3 is deferred to the online Supplementary Material.
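As background for the object being estimated, the classical rank-based empirical stable tail dependence function of Huang (1992) can be sketched as follows; this is the unconditional, nonrobust benchmark, not the conditional estimator \({{\widehat{L}}}_k(y_1,y_2|x_0)\) studied here, and the variable names are illustrative.

```python
import random

def empirical_stdf(xs, ys, k, s, t):
    # Huang's (1992) empirical stable tail dependence function:
    #   L_hat(s, t) = (1/k) * #{ i : rank(X_i) > n - k*s  or  rank(Y_i) > n - k*t },
    # i.e. the fraction (relative to k) of points extreme in either coordinate.
    n = len(xs)
    rx = {i: r + 1 for r, i in enumerate(sorted(range(n), key=lambda i: xs[i]))}
    ry = {i: r + 1 for r, i in enumerate(sorted(range(n), key=lambda i: ys[i]))}
    count = sum(1 for i in range(n) if rx[i] > n - k * s or ry[i] > n - k * t)
    return count / k

random.seed(1)
n, k = 1000, 100
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]
print(empirical_stdf(xs, xs, k, 1, 1))  # complete dependence: L(1,1) = max(1,1) = 1
print(empirical_stdf(xs, ys, k, 1, 1))  # independence: L(1,1) close to 1 + 1 = 2
```

Being rank-based, this benchmark is invariant to the margins; the robustness issue addressed in the paper arises because extreme observations can still exert unbounded influence on tail dependence estimates, motivating the MDPD approach.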
1.1 Auxiliary results in case of known margins
First, we establish the joint weak convergence of processes \(W_{n,1-t_j} := \lbrace \sqrt{kh_n^d} [T_{n,1-t_j}(y|x_0)-yf_X(x_0)]; y \in (0,T] \rbrace \), \(j=1,\ldots ,J\).
Lemma 1
Assume \(({\mathcal {D}}_{1-t_j})\) and \(({\mathcal {H}}_{1-t_j})\) for \(j=1,\ldots ,J\), \(({\mathcal {D}}_{0.5})\), \(({\mathcal {H}}_{0.5})\), \(({\mathcal {K}}_1)\), \(x_0\in Int(S_X)\) with \(f_X(x_0)>0\), and \(y \mapsto F_{Z_{1-t_j}}(y|x_0)\), \(j=1,\ldots ,J\), are strictly increasing. Consider sequences \(k \rightarrow \infty \) and \(h_n\rightarrow 0\) as \(n \rightarrow \infty \) such that \(k/n \rightarrow 0\), \(kh_n^d \rightarrow \infty \), \(h_n^{\eta _{\varepsilon _{1-t_1}}\wedge \cdots \wedge \eta _{\varepsilon _{1-t_J}}\wedge \eta _{\varepsilon _{0.5}}}\log \frac{n}{k} \rightarrow 0\), \(\sqrt{kh_n^d}h_n^{\eta _{f_X}\wedge \eta _{G_{1-t_1}}\wedge \cdots \wedge \eta _{G_{1-t_J}}}\rightarrow 0\), and for \(j=1,\ldots ,J\), \(\sqrt{kh_n^d} |\delta _{1-t_j}(U_{Z_{1-t_j}}({n\over k}|x_0)|x_0)|h_n^{\eta _{C_{1-t_j}}}\rightarrow 0\) and \(\sqrt{kh_n^d} |\delta _{1-t_j}(U_{Z_{1-t_j}}({n\over k}|x_0)|x_0)| h_n^{\eta _{\varepsilon _{1-t_j}}} \log {n\over k} \rightarrow 0\). Then, for \(n \rightarrow \infty \), we have
in \(\ell ^J((0,T])\), for any \(T >0\).
Lemma 2
Under the assumptions of Lemma 1, for any sequence \(u_n^{(j)}\) satisfying
as \(n \rightarrow \infty \), \(j=1, \ldots , J\), we have
Lemma 3
Assume \(({\mathcal {D}}_{1-t_j})\) and \(({\mathcal {H}}_{1-t_j})\) for \(j=1,\ldots ,J\), \(({\mathcal {D}}_{0.5})\), \(({\mathcal {H}}_{0.5})\), \(({\mathcal {K}}_1)\), \(x_0\in Int(S_X)\) with \(f_X(x_0)>0\), and \(y \mapsto F_{Z_{1-t_j}}(y|x_0)\), \(j=1,\ldots ,J\), are strictly increasing. Consider sequences \(k \rightarrow \infty \) and \(h_n\rightarrow 0\) as \(n \rightarrow \infty \) such that \(k/n \rightarrow 0\), \(kh_n^d \rightarrow \infty \), \(h_n^{\eta _{\varepsilon _{1-t_1}}\wedge \cdots \wedge \eta _{\varepsilon _{1-t_J}}\wedge \eta _{\varepsilon _{0.5}}}\log \frac{n}{k} \rightarrow 0\), \(\sqrt{kh_n^d}h_n^{\eta _{f_X}\wedge \eta _{G_{1-t_1}}\wedge \cdots \wedge \eta _{G_{1-t_J}}}\rightarrow 0\), \(\sqrt{kh_n^d} |\delta _{1-t_j}(U_{Z_{1-t_j}}({n\over k}|x_0)|x_0)|\rightarrow 0\), \(j=1,\dots ,J\). Then, we have
1.2 Joint weak convergence of \(S_{n,1-t_j}(s_j|x_0), j=1,\ldots , M\)
We have now all the ingredients to state the joint weak convergence of \(S_{n,1-t_j}(s_j|x_0)\), \(j=1,\ldots ,M\). Note that we allow for the possibility that \(t_j=t_{j'}\) for \(j \ne j'\), but of course the statistics \(S_{n,1-t_j}(s_j|x_0)\), \(j=1,\ldots ,M\), must be different. This is due to the fact that, for a given value of t, the study of the MDPD estimator \({{\widehat{\delta }}}_{n,1-t}\) requires the joint convergence in distribution of several statistics \(S_{n,1-t}(s|x_0)\), with different values of s.
Theorem 3
Under the conditions of Theorem 1, we have, for \(s_1,\ldots ,s_M <0\),
To prove Theorem 3, we first establish the weak convergence of an individual statistic \(S_{n,1-t}(s|x_0)\), properly normalized. We have the following decomposition
We study the terms separately. Clearly, using Lemma 5.2 from Goegebeur et al. (2021), we have that for n large, with arbitrarily large probability,
and hence, by Lemma 1 combined with the Skorohod construction we obtain \(T_{1,k}=o_{{\mathbb {P}}}(1)\) and \(T_{4,k}=o_{{\mathbb {P}}}(1).\)
Using Lemma 5.2 in Goegebeur et al. (2021) again, together with continuity, we have
Concerning \(T_{3,k}\), we can use the following decomposition:
By Proposition B.1.10 in de Haan and Ferreira (2006), for n large, with arbitrarily large probability, we have for \(\varepsilon , \xi >0\)
In the above, the notation \(a^{\pm \bullet }\) means \(a^{\bullet }\) if \(a\ge 1\) and \(a^{-\bullet }\) if \(a<1\). This implies by Lemma 3 and our conditions that
Concerning now \(T_{5,k}\), we have, for any small \(\delta \in (0,1)\),
Finally, concerning \(T_{6,k}\), we have
with
using arguments similar to those for \(T^{(1)}_{3,k}\). Consequently, using again Lemma 3, we deduce that
Combining decomposition (12) with (13)–(19), the proof of the marginal weak convergence of \(S_{n,1-t}(s|x_0)\), properly normalized, is achieved.
The joint weak convergence of \(( \sqrt{kh_n^d}[S_{n,1-t_j}(s_j|x_0) - f_X(x_0)/( 1-s_j) ], j=1,\ldots ,M )\) follows from Lemmas 1 and 3. \(\square \)
1.3 Proof of Theorem 1
Again, we first consider the case of a single estimator \({{\widehat{L}}}_k(y_1,y_2|x_0)\). From (3), (4) and (5), we deduce that
Now remark that
by (16). This implies that
Using the fact that
we can deduce that
Now, concerning the finite-dimensional convergence, it follows from Lemma 3 combined with the following theorem, which states the joint behavior of the MDPD estimators \({{\widehat{\delta }}}_{n,1-t_j}, j=1,\ldots , J\), and whose proof is deferred to the online Supplementary Material:
Theorem 4
Under the conditions of Theorem 1, with probability tending to one, there exist sequences of solutions \(({{\widehat{\delta }}}_{n,1-t_j})_{n \ge 1}\), \(j=1,\ldots ,J,\) to the MDPD estimating equations such that
Moreover, for the consistent solution sequences one has that
where c is defined in Theorem 1. \(\square \)
About this article
Cite this article
Goegebeur, Y., Guillou, A. & Qin, J. Robust estimation of the conditional stable tail dependence function. Ann Inst Stat Math 75, 201–231 (2023). https://doi.org/10.1007/s10463-022-00839-1