On the V_γ Dimension for Regression in Reproducing Kernel Hilbert Spaces

  • Conference paper
  • Conference: Algorithmic Learning Theory (ALT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1720)

Abstract

This paper computes the V_γ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS) for the Support Vector Machine (SVM) regression ε-insensitive loss function L_ε and for general L_p loss functions. The V_γ dimension is shown to be finite, which in turn establishes uniform convergence in probability for regression machines in RKHS subspaces using the L_ε or general L_p loss functions; a novel proof of this result is given. Under certain conditions, an upper bound on the V_γ dimension is also computed, leading to an approach for estimating the empirical V_γ dimension from a set of training data.
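
For concreteness, the two loss functions named in the abstract have standard closed forms: the ε-insensitive loss is L_ε(y, f(x)) = max(0, |y − f(x)| − ε), which is zero whenever the prediction falls within a tube of width ε around the target, and the general L_p loss is |y − f(x)|^p. The minimal Python sketch below illustrates both; the function names and example values are illustrative only and are not taken from the paper.

```python
import numpy as np

def eps_insensitive_loss(y, f_x, eps=0.1):
    # SVM regression epsilon-insensitive loss L_eps:
    # zero inside the tube |y - f(x)| <= eps, linear outside it.
    return np.maximum(0.0, np.abs(y - f_x) - eps)

def lp_loss(y, f_x, p=2):
    # General L_p loss: |y - f(x)|^p.
    return np.abs(y - f_x) ** p

# Illustrative residuals (hypothetical values, not from the paper).
y = np.array([1.0, 2.0, 3.0])
f_x = np.array([1.05, 2.5, 2.0])
print(eps_insensitive_loss(y, f_x, eps=0.1))  # [0.  0.4 0.9]
print(lp_loss(y, f_x, p=2))                   # [0.0025 0.25 1.]
```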

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Evgeniou, T., Pontil, M. (1999). On the V_γ Dimension for Regression in Reproducing Kernel Hilbert Spaces. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science (LNAI), vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_9

  • DOI: https://doi.org/10.1007/3-540-46769-6_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66748-3

  • Online ISBN: 978-3-540-46769-4

  • eBook Packages: Springer Book Archive
