Abstract
This chapter presents an overview of statistical learning theory and describes key results on uniform convergence of empirical means and the related sample complexity. This theory provides a fundamental extension of the probability inequalities studied in Chap. 8 to the case when parameterized families of functions are considered, instead of a single fixed function. The chapter formally studies the UCEM (uniform convergence of empirical means) property and the VC dimension in the context of the Vapnik–Chervonenkis theory. Extensions to the Pollard theory for continuous-valued functions are also discussed.
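The UCEM property described above can be illustrated with a minimal numerical sketch (an illustration constructed here, not taken from the chapter): for the family of half-line indicator functions {x ↦ 1(x ≤ t), t ∈ ℝ}, which has VC dimension 1, the Vapnik–Chervonenkis theory guarantees that the worst-case gap between empirical frequencies and true probabilities, taken uniformly over the whole family, shrinks as the sample size grows. The script below estimates this uniform gap for samples drawn uniformly on [0, 1], where it coincides with the Kolmogorov–Smirnov statistic.

```python
import random

def max_empirical_deviation(samples):
    """Sup over thresholds t of |P_N(x <= t) - P(x <= t)| for U[0,1] data.

    This is the uniform deviation over the family of half-line indicators
    (VC dimension 1); the supremum is attained at the sample points, so a
    single sorted pass suffices.
    """
    xs = sorted(samples)
    n = len(xs)
    dev = 0.0
    for i, x in enumerate(xs):
        # Empirical CDF jumps from i/n to (i+1)/n at x; the true CDF is x.
        dev = max(dev, abs(x - i / n), abs(x - (i + 1) / n))
    return dev

random.seed(0)
small = max_empirical_deviation([random.random() for _ in range(100)])
large = max_empirical_deviation([random.random() for _ in range(100_000)])
print(small, large)  # the uniform deviation shrinks as N grows
```

Because the family has finite VC dimension, the deviation converges to zero uniformly over all thresholds, not merely for each fixed threshold; this is precisely the distinction the UCEM property captures.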
References
Alamo T, Tempo R, Camacho EF (2009) A randomized strategy for probabilistic solutions of uncertain feasibility and optimization problems. IEEE Trans Autom Control 54:2545–2559
Cover TM (1965) Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans Electron Comput 14:326–334
Devroye L, Györfi L, Lugosi G (1996) A probabilistic theory of pattern recognition. Springer, New York
Dudley RM (1978) Central limit theorems for empirical measures. Ann Probab 6:899–929
Dudley RM (1979) Balls in ℝk do not cut all subsets of k+2 points. Adv Math 31:306–308
Dudley RM (1984) A course on empirical processes. Springer, New York
Dudley RM (1999) Uniform central limit theorems. Cambridge University Press, Cambridge
Haussler D (1992) Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf Comput 100:78–150
Karpinski M, Macintyre A (1997) Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks. J Comput Syst Sci 54:169–176
Lugosi G (2002) Pattern classification and learning theory. In: Györfi L (ed) Principles of nonparametric learning. Springer, New York, pp 1–56
Macintyre AJ, Sontag ED (1993) Finiteness results for sigmoidal "neural" networks. In: Proceedings of the ACM symposium on theory of computing, pp 325–334
Parrondo JM, van den Broeck C (1993) Vapnik-Chervonenkis bounds for generalization. J Phys A 26:2211–2223
Pollard D (1984) Convergence of stochastic processes. Springer, New York
Pollard D (1990) Empirical processes: theory and applications. NSF-CBMS regional conference series in probability and statistics, vol 2. Institute of Mathematical Statistics
Sauer N (1972) On the density of families of sets. J Comb Theory 13(A):145–147
Sontag ED (1998) VC dimension of neural networks. In: Bishop CM (ed) Neural networks and machine learning. Springer, New York
Vapnik VN (1998) Statistical learning theory. Wiley, New York
Vapnik VN, Chervonenkis AY (1971) On the uniform convergence of relative frequencies to their probabilities. Theory Probab Appl 16:264–280
Vidyasagar M (2001) Randomized algorithms for robust controller synthesis using statistical learning theory. Automatica 37:1515–1528
Vidyasagar M (2002) Learning and generalization: with applications to neural networks, 2nd edn. Springer, New York
Wenocur RS, Dudley RM (1981) Some special Vapnik-Chervonenkis classes. Discrete Math 33:313–318
© 2013 Springer-Verlag London
Cite this chapter
Tempo, R., Calafiore, G., Dabbene, F. (2013). Statistical Learning Theory. In: Randomized Algorithms for Analysis and Control of Uncertain Systems. Communications and Control Engineering. Springer, London. https://doi.org/10.1007/978-1-4471-4610-0_9
Print ISBN: 978-1-4471-4609-4
Online ISBN: 978-1-4471-4610-0