Abstract
The previous chapters have been concerned with predicting the values of one or more outputs or response variables Y = (Y_1, ..., Y_m) for a given set of input or predictor variables X = (X_1, ..., X_p). Denote by x_i = (x_i1, ..., x_ip) the inputs for the ith training case, and let y_i be a response measurement. The predictions are based on the training sample (x_1, y_1), ..., (x_N, y_N) of previously solved cases, where the joint values of all of the variables are known. This is called supervised learning or "learning with a teacher." Under this metaphor the "student" presents an answer ŷ_i for each x_i in the training sample, and the supervisor or "teacher" provides either the correct answer and/or an error associated with the student's answer. This is usually characterized by some loss function L(y, ŷ), for example, L(y, ŷ) = (y − ŷ)².
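The supervised-learning setup described above can be sketched in a few lines of code: a "student" proposes a prediction ŷ_i for each training input x_i, and the "teacher" scores the answers with the squared-error loss L(y, ŷ) = (y − ŷ)². The constant (mean) predictor below is a hypothetical illustrative choice, not a method from the chapter.

```python
def squared_error_loss(y, yhat):
    """L(y, yhat) = (y - yhat)^2 for a single case."""
    return (y - yhat) ** 2

def average_loss(ys, yhats):
    """Average squared-error loss over the N training cases."""
    return sum(squared_error_loss(y, yh) for y, yh in zip(ys, yhats)) / len(ys)

# Training sample (x_i, y_i), i = 1, ..., N, with scalar inputs for simplicity.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.2, 1.9, 3.2, 3.9]

# "Student": a naive constant predictor that answers the sample mean of y
# for every input; the "teacher" then reports the average loss.
mean_y = sum(ys) / len(ys)
yhats = [mean_y for _ in xs]

print(average_loss(ys, yhats))
```

Any rule mapping x_i to ŷ_i could stand in for the constant predictor here; the point is only that the loss function quantifies the teacher's feedback on each answer.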
Copyright information
© 2001 Springer Science+Business Media New York
About this chapter
Cite this chapter
Hastie, T., Friedman, J., Tibshirani, R. (2001). Unsupervised Learning. In: The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-21606-5_14
DOI: https://doi.org/10.1007/978-0-387-21606-5_14
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4899-0519-2
Online ISBN: 978-0-387-21606-5
eBook Packages: Springer Book Archive