Regularized Kernel-Based Reconstruction in Generalized Besov Spaces

Abstract

We present a theoretical framework for reproducing kernel-based reconstruction methods in certain generalized Besov spaces based on positive, essentially self-adjoint operators. An explicit representation of the reproducing kernel is given in terms of an infinite series. We provide stability estimates for the kernel, including inverse Bernstein-type estimates for kernel-based trial spaces, and we give condition estimates for the interpolation matrix. Then, a deterministic error analysis for regularized reconstruction schemes is presented by means of sampling inequalities. In particular, we provide error bounds for a regularized reconstruction scheme based on a numerically feasible approximation of the kernel. This allows us to derive explicit coupling relations between the series truncation, the regularization parameters and the data set.

Notes

  1. It follows from [10, Theorems 3.4 & 3.7] and (39) that such integral kernels exist for \(\sigma > d/2\), with \(\mathbf{K}^{(\sigma )}(x, \cdot ) \in L^p(M; d\mu )\) for \(1 \le p \le \infty \) and \(\mathbf{K}^{(\sigma )}(x, \cdot ) \in B_{2,2}^{\sigma }(M; \mathcal{D})\).

  2. For \(p\ne \infty \), the convergence rate is \(\delta ^{\sigma -d/r}\) instead of the anticipated rate \(\delta ^{\sigma -d(1/r-1/p)_+}\), where \((x)_{+}=\max \{x,0\}\) (cf. [61] for the case of classical Sobolev spaces). This is most likely due to the fact that we work with the global estimates (51) and (52) instead of local estimates on a cover (see also [41, 61]).

  3. There are also lower bounds for the covering number, cf. [11, Theorem 5.21].

  4. For practical considerations, other parameter choices could be more useful. We do not give the details here but leave those considerations to the reader, since we work in a very general framework and hence do not have a model for the numerical cost of realizing \(\varepsilon _{\max }\). In many specific applications, an estimate for these costs is available and can be employed in an exhaustive cost–benefit discussion.

References

  1. R. A. Adams and J. J. F. Fournier, Sobolev Spaces, Academic Press, Oxford (UK), 2003.

  2. R. Arcangéli, M. C. López de Silanes, and J. J. Torrens, An extension of a bound for functions in Sobolev spaces, with applications to (m,s)-spline interpolation and smoothing, Numer. Math., 107(2) (2007), pp. 181–211.

  3. R. Arcangéli, M. C. López de Silanes, and J. J. Torrens, Estimates for functions in Sobolev spaces defined on unbounded domains, J. Approx. Theory, 161 (2009), pp. 198–212.

  4. R. Arcangéli, M. C. López de Silanes, and J. J. Torrens, Extension of sampling inequalities to Sobolev semi-norms of fractional order and derivative data, Numer. Math., 121 (2012), pp. 587–608.

  5. L. Boytsov and B. Naidan, Learning to prune in metric and non-metric spaces, in Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5–8, 2013, Lake Tahoe, Nevada, United States, C. J. C. Burges, L. Bottou, Z. Ghahramani, and K. Q. Weinberger, eds., 2013, pp. 1574–1582.

  6. T. Bozkaya and M. Ozsoyoglu, Distance-based indexing for high-dimensional metric spaces, in Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data, SIGMOD ’97, New York, NY, USA, 1997, ACM, pp. 357–368.

  7. S. Chandrasekaran, K. R. Jayaraman, and H. N. Mhaskar, Minimum Sobolev norm interpolation with trigonometric polynomials on the torus, J. Comput. Phys., 249 (2013), pp. 96–112.

  8. S. Chandrasekaran and H. N. Mhaskar, A construction of linear bounded interpolatory operators on the torus, Preprint. https://arxiv.org/pdf/1011.5448.pdf.

  9. T. Coulhon and A. Grigor’yan, Random walks on graphs with regular volume growth, Geom. Funct. Anal., 8 (1998), pp. 656–701.

  10. T. Coulhon, G. Kerkyacharian, and P. Petrushev, Heat kernel generated frames in the setting of Dirichlet spaces, J. Fourier Anal. Appl., 18 (2012), pp. 995–1066.

  11. F. Cucker and D.-X. Zhou, Learning Theory: An Approximation Theory Viewpoint, Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2007.

  12. P. C. Curtis, \(n\)-parameter families and best approximation, Pac. J. Math., 9 (1959), pp. 1013–1027.

  13. N. Dunford and J. Schwartz, Linear Operators, Part I: General Theory, Interscience Publishers, New York, 1958.

  14. N. Dyn, F. J. Narcowich, and J. D. Ward, Variational principles and Sobolev-type estimates for generalized interpolation on a Riemannian manifold, Constr. Approx., 15 (1999), pp. 175–208.

  15. P. F. Evangelista, M. J. Embrechts, and B. K. Szymanski, Taming the curse of dimensionality in kernels and novelty detection, in Applied Soft Computing Technologies: The Challenge of Complexity, A. Abraham, B. de Baets, M. Köppen, and B. Nickolay, eds., vol. 34 of Advances in Soft Computing, Springer Berlin Heidelberg, 2006, pp. 425–438.

  16. D. Geller and I. Z. Pesenson, Band-limited localized Parseval frames and Besov spaces on compact homogeneous manifolds, J. Geomet. Anal., 21 (2011), pp. 334–371.

  17. A. Globerson and S. Roweis, Metric learning by collapsing classes, Adv. Neural Inf. Process Syst., 18 (2006), pp. 451–458.

  18. M. Gordina, T. Kumagai, L. Saloff-Coste, and K.-T. Sturm, Heat kernels, stochastic processes and functional inequalities, Oberwolfach Reports, 10 (2013), pp. 1359–1443.

  19. M. Griebel, C. Rieger, and B. Zwicknagl, Multiscale approximation and reproducing kernel Hilbert space methods, SIAM J. Numer. Anal., 53 (2015), pp. 852–873.

  20. A. Grigor’yan, Heat kernels on weighted manifolds and applications, Cont. Math., 398 (2006), pp. 93–191.

  21. A. Grigor’yan, Heat Kernel and Analysis on Manifolds, vol. 47 of AMS/IP Studies in Advanced Mathematics, American Mathematical Society, USA, 2009.

  22. T. Hangelbroek, F. J. Narcowich, C. Rieger, and J. D. Ward, An inverse theorem for compact Lipschitz regions in \(\mathbb{R}^d\) using localized kernel bases. Math. Comp., AMS early view version. doi:10.1090/mcom/3256.

  23. T. Hangelbroek, F. J. Narcowich, X. Sun, and J. D. Ward, Kernel approximation on manifolds II: the \(L_{\infty }\) projector, SIAM J. Math. Anal., 43 (2011), pp. 662–684.

  24. K. Jetter, J. Stöckler, and J. D. Ward, Norming sets and scattered data approximation on spheres, in Approximation Theory IX, Vol. II: Computational Aspects, Vanderbilt University Press, 1998, pp. 137–144.

  25. P. Jorgensen and F. Tian, Frames and factorization of graph Laplacians, Opuscula Math., 35 (2015), pp. 293–332.

  26. A. Kaenmaki, J. Lehrback, and M. Vuorinen, Dimensions, Whitney covers, and tubular neighborhoods, Indiana Univ. Math. J., 62 (2013), pp. 1861–1889.

  27. E. Keogh and A. Mueen, Curse of dimensionality, in Encyclopedia of Machine Learning, C. Sammut and G. I. Webb, eds., Springer US, 2010, pp. 257–258.

  28. G. Kerkyacharian and P. Petrushev, Heat kernel based decomposition of spaces of distributions in the framework of Dirichlet spaces, Trans. Amer. Math. Soc., 367 (2015), pp. 121–189.

  29. S. Lin, Nonparametric regression using needlet kernels for spherical data, available at arXiv:1502.04168, 2015.

  30. W. Madych, An estimate for multivariate approximation II, J. Approx. Theory, 142 (2006), pp. 116–128.

  31. M. Maggioni and H. N. Mhaskar, Diffusion polynomial frames on metric measure spaces, Appl. Comp. Harm. Anal., 24 (2008), pp. 329–353.

  32. J. Mairhuber, On Haar’s theorem concerning Chebysheff problems having unique solutions, Proc. Amer. Math. Soc., 7 (1956), pp. 609–615.

  33. H. N. Mhaskar, A Markov–Bernstein inequality for Gaussian networks, in Trends and Applications in Constructive Approximation, vol. 151 of Internat. Ser. Numer. Math., Birkhäuser, Basel, 2005, pp. 165–180.

  34. H. N. Mhaskar, Eignets for function approximation on manifolds, Appl. Comp. Harm. Anal., 29 (2010), pp. 63–87.

  35. H. N. Mhaskar, F. J. Narcowich, J. Prestin, and J. D. Ward, \(L^p\) Bernstein estimates and approximation by spherical basis functions, Math. Comp., 79 (2010), pp. 1647–1679.

  36. F. J. Narcowich, P. Petrushev, and J. D. Ward, Decomposition of Besov and Triebel-Lizorkin spaces on the sphere, J. Funct. Anal., 238 (2006), pp. 530–564.

  37. F. J. Narcowich, X. Sun, J. D. Ward, and H. Wendland, Direct and inverse Sobolev error estimates for scattered data interpolation via spherical basis functions, Found. Comput. Math., 7 (2007), pp. 369–390.

  38. F. J. Narcowich, J. D. Ward, and H. Wendland, Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions, Constr. Approx., 24 (2006), pp. 175–186.

  39. F. J. Narcowich, P. Petrushev, and J. D. Ward, Localized Tight Frames on Spheres, SIAM J. Math. Anal., 38 (2006), pp. 574–594.

  40. F. J. Narcowich and J. D. Ward, Scattered-data interpolation on \(\mathbb{R}^n\): Error estimates for radial basis and band-limited functions, SIAM J. Math. Anal., 36 (2004), pp. 284–300.

  41. F. J. Narcowich, J. D. Ward, and H. Wendland, Sobolev bounds on functions with scattered zeros, with applications to radial basis function surface fitting, Math. Comp., 74 (2005), pp. 743–763.

  42. R. Opfer, Multiscale kernels, Adv. Comp. Math., 25 (2006), pp. 357–380.

  43. R. Opfer, Tight frame expansions of multiscale reproducing kernels in Sobolev spaces, Appl. Comput. Harm. Anal., 20 (2006), pp. 357–374.

  44. I. Pesenson, A sampling theorem on homogeneous manifolds, Trans. Amer. Math. Soc., 352 (2000), pp. 4257–4269.

  45. P. Petrushev and Y. Xu, Decomposition of spaces of distributions induced by Hermite expansions, J. Fourier Anal. Appl., 14 (2008), pp. 371–414.

  46. C. Rieger, Sampling inequalities and applications, PhD thesis, University of Göttingen, 2008. http://hdl.handle.net/11858/00-1735-0000-0006-B3B9-0.

  47. C. Rieger, R. Schaback, and B. Zwicknagl, Sampling and stability, Mathematical Methods for Curves and Surfaces, vol. 5862 of Lecture Notes in Computer Science, M. Dæhlen, M. Floater, T. Lyche, J. L. Merrien, K. Mørken, and L. L. Schumaker, eds., Springer, Berlin, Heidelberg, 2010, pp. 347–369.

  48. C. Rieger and B. Zwicknagl, Deterministic error analysis of support vector machines and related regularized kernel methods, J. Mach. Learn. Res., 10 (2009), pp. 2115–2132.

  49. C. Rieger and B. Zwicknagl, Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning, Adv. Comp. Math., 32(1) (2010), pp. 103–129.

  50. R. Schaback and H. Wendland, Inverse and saturation theorems for radial basis function interpolation, Math. Comp., 71 (2002), pp. 669–681.

  51. R. Schaback and H. Wendland, Kernel techniques: From machine learning to meshless methods, Acta Numerica, 15 (2006), pp. 543–639.

  52. B. Schölkopf and A. J. Smola, Learning with kernels - Support Vector Machines, Regularisation, and Beyond, MIT Press, Cambridge, Massachusetts, 2002.

  53. S. Smale and D.-X. Zhou, Estimating the approximation error in learning theory, Anal. Appl., 01 (2003), pp. 17–41.

  54. B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet, Hilbert space embeddings and metrics on probability measures, J. Mach. Learn. Res., 11 (2010), pp. 1517–1561.

  55. H. Triebel, Interpolation theory, function spaces, differential operators, North-Holland Mathematical Library, Amsterdam-New York, 1978.

  56. H. Triebel, Theory of function spaces, vol. 78 of Monographs in Math., Birkhäuser Verlag, Basel, 1983.

  57. F. Utreras, Convergence rates for multivariate smoothing spline functions, J. Approx. Theory, 52 (1988), pp. 1–27.

  58. J. P. Ward, \(L^p\) Bernstein inequalities and inverse theorems for RBF approximation on \(\mathbb{R}^d\), J. Approx. Theory, 164 (2012), pp. 1577–1593.

  59. H. Wendland, Local polynomial reproduction and moving least squares approximation, IMA J. Numer. Anal., 21 (2001), pp. 285–300.

  60. H. Wendland, Scattered Data Approximation, Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, 2005.

  61. H. Wendland and C. Rieger, Approximate interpolation with applications to selecting smoothing parameters, Numer. Math., 101 (2005), pp. 729–748.

  62. J. Werner, Numerische Mathematik 1. Lineare und nichtlineare Gleichungssysteme, Interpolation, numerische Integration, Vieweg, Braunschweig-Wiesbaden, 1992.

  63. W. Zhang, X. Xue, Z. Sun, Y. F. Guo, and H. Lu, Optimal dimensionality of metric space for classification, in ICML ’07: Proceedings of the 24th international conference on Machine learning, New York, NY, USA, 2007, ACM Press, pp. 1135–1142.

  64. D.-X. Zhou, The covering number in learning theory, J. Complexity, 18 (2002), pp. 739–767.

  65. B. Zwicknagl, Mathematical analysis of microstructures and low hysteresis shape memory alloys, PhD thesis, University of Bonn, 2011.

Acknowledgements

We are grateful for the comments and suggestions of the anonymous referees. The authors acknowledge support of the Deutsche Forschungsgemeinschaft (DFG) through the Sonderforschungsbereich 1060: The Mathematics of Emergent Effects.

Corresponding author

Correspondence to Christian Rieger.

Additional information

Communicated by Pencho Petrushev.

Appendix

In this appendix, we give an explicit bound on the constant \(\tilde{b}\) of Remark 2 for the Euclidean space \(\mathbb {R}^d\) with \(d\ge 2\), closely following the proof of the corresponding statement in [10, Lemma 3.19]. We first recall the general strategy and then carry out the necessary estimates in our setting. For measurable sets \(\varOmega \subset \mathbb {R}^d\), we set \(|\varOmega |:=\mu (\varOmega )\). In this case, (8) and (9) hold with \(\beta =d\), i.e.,

$$\begin{aligned}0<|B(x,2t)|=2^d|B(x,t)|\quad \text {for all }x\in \mathbb {R}^d, \quad \text {and all }t>0. \end{aligned}$$

Furthermore,

$$\begin{aligned}|B(x,\sqrt{t})|=\frac{(\pi t)^{d/2}}{\varGamma (\frac{d}{2}+1)},\quad \text {and }p_t(x,y)=\frac{1}{(4\pi t)^{d/2}}\exp \left( -\Vert x-y\Vert ^2/(4t)\right) .\end{aligned}$$

Consequently,

$$\begin{aligned}p_t(x,x)=(4\pi t)^{-d/2}=\frac{1}{2^d\varGamma (d/2+1)}|B(x,\sqrt{t})|^{-1}, \end{aligned}$$

and in particular, if \(\mathbf {1}\) denotes the characteristic function,

$$\begin{aligned} \mathbf {1}_{[0,\tau ]}(\sqrt{\mathcal {D}})(x,x)\le e \cdot p_{\tau ^{-2}}(x,x)=\frac{e}{2^d\varGamma (d/2+1)}|B(x,\tau ^{-1})|^{-1} \quad \text {for all }\tau >0. \end{aligned}$$
(118)
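
As an aside (ours, not part of the original argument), the identity \(p_t(x,x)=\frac{1}{2^d\varGamma (d/2+1)}|B(x,\sqrt{t})|^{-1}\) underlying (118) is easy to confirm numerically from the explicit formulas above; the following minimal Python sketch, using only the standard library, checks it for a few values of \(d\) and \(t\).

```python
# Sanity check (illustrative only): in Euclidean R^d,
#   p_t(x,x) = (4 pi t)^{-d/2}  equals  |B(x, sqrt(t))|^{-1} / (2^d Gamma(d/2 + 1)).
import math

for d in (1, 2, 3, 5, 10):
    for t in (0.1, 1.0, 7.3):
        heat_diag = (4 * math.pi * t) ** (-d / 2)                    # p_t(x,x)
        ball_vol = (math.pi * t) ** (d / 2) / math.gamma(d / 2 + 1)  # |B(x, sqrt(t))|
        rhs = 1.0 / (2 ** d * math.gamma(d / 2 + 1) * ball_vol)
        assert math.isclose(heat_diag, rhs, rel_tol=1e-12), (d, t)
print("heat kernel diagonal identity verified numerically")
```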

It is shown in [10, Lemma 3.19] that for \(\tau >0\) and \(r\in \mathbb {N}\), we can set \(\tau \sqrt{t}=2^{r}\) such that

$$\begin{aligned}\frac{2^{-rd}}{|B(x,\tau ^{-1})|}\left( c'-2^dc_4\sum _{k\ge r}\exp (-2^{2k})2^{kd}\right) \le \mathbf {1}_{[0,\tau ]}(\sqrt{\mathcal {D}})(x,x), \end{aligned}$$

holds, where the constants \(c':=\frac{1}{2^d\varGamma (d/2+1)}\) and \(c_4:=ec'\) can be read off from (118). Hence, to make the lower bound positive, we need to choose \(r\in \mathbb {N}\) large enough such that

$$\begin{aligned} \exp (-1)>\sum _{k\ge r}\exp (-2^{2k})2^{kd}. \end{aligned}$$
(119)

Once we have an appropriate \(r\in \mathbb {N}\) at hand, we follow the argument in [10] and set (see [10, (3.44)])

$$\begin{aligned}c_3:=\frac{2^{-r}}{2^d\varGamma (d/2+1)}\left( 1-\sum _{k\ge r}\exp (-2^{2k})2^{(k+1)d}\right) >0, \end{aligned}$$

and choose an \(\ell >0\) large enough such that

$$\begin{aligned} 0<c_32^{d\ell }-c_4. \end{aligned}$$
(120)

Then, following the proof of [10, Lemma 3.19], we may set \(\tilde{b}:=2^{\ell }\).
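
The construction of \(\tilde{b}\) just described is entirely explicit, so it can be mirrored in a few lines of code. The Python sketch below is our own illustration (the helper names tail, r_of_d, and b_tilde are hypothetical, and the tail sums are evaluated in the exponent to avoid overflow): it takes \(r=r(d)\) from Lemma 10 below, forms \(c_3\) and \(c_4\), and evaluates \(\tilde{b}=2^{\ell }\) with \(\ell \) at the threshold of (120).

```python
# Illustrative sketch of the construction of b_tilde = 2^ell in Euclidean R^d,
# with r = r(d) taken from Lemma 10 below.
import math

LN2 = math.log(2.0)

def r_of_d(d: int) -> int:
    # smallest integer >= (7 / (2 ln 2)) ln((d ln 2 + 1) / (2 ln 2)), cf. Lemma 10
    return math.ceil(7.0 / (2 * LN2) * math.log((d * LN2 + 1) / (2 * LN2)))

def tail(d: int, r: int, shift: int = 0, terms: int = 60) -> float:
    # sum_{k >= r} exp(-2^(2k)) * 2^((k + shift) d); the summands decay doubly
    # exponentially, so a modest number of terms gives machine precision.
    return sum(math.exp((k + shift) * d * LN2 - 2.0 ** (2 * k))
               for k in range(r, r + terms))

def b_tilde(d: int) -> float:
    r = r_of_d(d)
    assert tail(d, r) < math.exp(-1)                   # (119) holds
    c_prime = 1.0 / (2 ** d * math.gamma(d / 2 + 1))   # c' from (118)
    c4 = math.e * c_prime
    c3 = 2.0 ** (-r) * c_prime * (1.0 - tail(d, r, shift=1))  # cf. [10, (3.44)]
    ell = math.log2(c4 / c3) / d                       # threshold of (120)
    return 2.0 ** ell

for d in (2, 3, 5, 10, 100):
    print(f"d = {d:3d}: r = {r_of_d(d)}, b_tilde ~ {b_tilde(d):.4f}")
```

For instance, for \(d=2\) this yields \(r=3\); any \(\ell \) strictly above the printed threshold satisfies (120).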

Lemma 10

If, for \(d\ge 2\), we choose \(r(d)\in \mathbb {N}\) as the smallest integer such that

$$\begin{aligned}r(d)\ge \frac{7}{2\ln (2)}\ln \left( \frac{d\ln (2)+1}{2\ln (2)}\right) , \end{aligned}$$

then (119) holds.

Proof

We determine \(r\in \mathbb {N}\) such that

$$\begin{aligned} \exp (-2^{2k})2^{(k+1)d}<e^{-k}\text { for all }k\ge r, \end{aligned}$$
(121)

since then, by (121) and \(d\ge 2\),

$$\begin{aligned} \sum _{k\ge r} \exp (-2^{2k})2^{kd} = 2^{-d}\sum _{k\ge r} \exp (-2^{2k})2^{(k+1)d}< 2^{-d}\sum _{k\ge r}\exp (-k) = \frac{2^{-d}\exp (-r)}{1-e^{-1}} \le 2\exp (-(r+1)) < 2\exp (-2). \end{aligned}$$
(122)

To determine \(r:=r(d)\) such that (121) holds, we set

$$\begin{aligned}h_d(x):=-\exp (2x\ln (2))+d(x+1)\ln (2)+x. \end{aligned}$$

Then, \(h_d'(x)=-2\ln (2)\exp (2x\ln (2))+d\ln (2)+1\), and, since \(h_d(x)\rightarrow -\infty \) as \(x \rightarrow \pm \infty \), \(h_d\) has a unique global maximum at

$$\begin{aligned} \tilde{x}:=\frac{1}{2\ln (2)}\ln \left( \frac{d\ln (2)+1}{2\ln (2)}\right) . \end{aligned}$$

Note that \(h_d(\tilde{x})>0\). Since \(h_d\) is strictly decreasing on \([\tilde{x},\infty )\), it suffices to find \(r\ge \tilde{x}\) with \(h_d(r)<0\); then \(h_d(k)\le h_d(r)<0\) for all \(k\ge r\), and (121) follows. We make the ansatz \(r=:s\tilde{x}\) with \(s\ge 1\) and use the abbreviations \(a_d:=d\ln (2)\) and \(b:=2\ln (2)\). Then,

$$\begin{aligned} h_d(s\tilde{x})=-\left( \frac{a_d+1}{b}\right) ^s+\frac{s}{b}\ln \left( \frac{a_d+1}{b}\right) (a_d+1)+a_d. \end{aligned}$$
(123)

If \(d=2,\dots ,10\), it suffices to choose \(s=6\). If \(d\ge 10\), we set

$$\begin{aligned} \tilde{h}_d(s):= -\left( \frac{a_d+1}{b}\right) ^s+\frac{(a_d+1)}{b}\left( s\ln \left( \frac{a_d+1}{b}\right) +b\right) \ge h_d(s\tilde{x}). \end{aligned}$$

Now we set

$$\begin{aligned} s(d):=\max \left\{ 2,\ \frac{1+2\ln (d/2)}{\ln (d/2)-1}\right\} , \end{aligned}$$

and estimate very roughly as follows: Since \(\ln (\frac{a_d+1}{b})\le \frac{a_d+1}{b}\) and \(b\le 2\frac{a_d+1}{b}\), we have

$$\begin{aligned} \tilde{h}_d(s)\le \left( \frac{a_d+1}{b}\right) ^2\left[ -\left( \frac{a_d+1}{b}\right) ^{s-2}+s+2\right] <0, \end{aligned}$$

since by the choice of s,

$$\begin{aligned} \ln \left( \frac{a_d+1}{b}\right) \ge \ln (d/2)\ge \frac{s+1}{s-2}\ge \frac{\ln (s+2)}{s-2}. \end{aligned}$$

Note that for \(d\ge 10\), s(d) is monotonically decreasing. Therefore, \(s(d)\le s(10)\le 7\), and with \(r(d)\ge 7\tilde{x}\) the assertion follows. \(\square \)
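
As a numerical aside (ours, not part of the proof), the conclusion can be spot-checked directly: the following Python sketch verifies (121) for \(r(d)\) as in the statement of the lemma, over a range of \(d\) and \(k\), comparing exponents to avoid underflow.

```python
# Spot-check of (121) with r(d) from Lemma 10:
#   exp(-2^(2k)) * 2^((k+1) d) < exp(-k)  for all k >= r(d),
# checked via the equivalent log form  -2^(2k) + (k+1) d ln 2 + k < 0.
import math

LN2 = math.log(2.0)

def r_of_d(d: int) -> int:
    # smallest integer >= (7 / (2 ln 2)) ln((d ln 2 + 1) / (2 ln 2))
    return math.ceil(7.0 / (2 * LN2) * math.log((d * LN2 + 1) / (2 * LN2)))

for d in range(2, 200):
    for k in range(r_of_d(d), r_of_d(d) + 50):
        assert -(2.0 ** (2 * k)) + (k + 1) * d * LN2 + k < 0, (d, k)
print("(121) holds for all sampled d and k")
```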

We now turn to (120) and use \(r\) as obtained in Lemma 10. By (122), it suffices to choose \(\ell >0\) such that

$$\begin{aligned} 2^{d\ell -r(d)}(1-\frac{2}{e})>e, \end{aligned}$$

which holds for

$$\begin{aligned} \ell \ge \frac{1}{d}\left[ \frac{3-\ln (e-2)}{\ln (2)}+r(d)\right] . \end{aligned}$$

In particular, \(\ell \) can be chosen such that \(\ell \rightarrow 0\) as \(d\rightarrow \infty \), and thus \(\tilde{b}=2^{\ell }\rightarrow 1\).
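
As a final illustration (ours), evaluating this explicit bound for \(\ell \) together with \(r(d)\) from Lemma 10 shows the rate at which \(\tilde{b}=2^{\ell }\) tends to 1:

```python
# Illustration: the explicit admissible ell above and b_tilde = 2^ell for growing d.
import math

LN2 = math.log(2.0)

def r_of_d(d: int) -> int:
    # smallest integer >= (7 / (2 ln 2)) ln((d ln 2 + 1) / (2 ln 2)), cf. Lemma 10
    return math.ceil(7.0 / (2 * LN2) * math.log((d * LN2 + 1) / (2 * LN2)))

for d in (2, 10, 100, 1000, 10**6):
    ell = ((3 - math.log(math.e - 2)) / LN2 + r_of_d(d)) / d
    print(f"d = {d:>7}: ell = {ell:.6f}, b_tilde = {2.0 ** ell:.6f}")
```

Since \(r(d)\) grows only logarithmically in \(d\), this bound gives \(\ell =O(\ln (d)/d)\), which quantifies the convergence \(\tilde{b}\rightarrow 1\).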

Cite this article

Griebel, M., Rieger, C. & Zwicknagl, B. Regularized Kernel-Based Reconstruction in Generalized Besov Spaces. Found Comput Math 18, 459–508 (2018). https://doi.org/10.1007/s10208-017-9346-z
