
Part of the book series: Perspectives in Neural Computing

Abstract

This chapter gives a tutorial introduction to ensemble learning, a recently developed Bayesian method. For many problems it is intractable to perform inferences using the true posterior density over the unknown variables. Ensemble learning allows the true posterior to be approximated by a simpler distribution for which the required inferences are tractable. When we say we are making a model of a system, we are setting up a tool which can be used to make inferences, predictions and decisions. Each model can be seen as a hypothesis, or explanation, which makes assertions about the quantities that are directly observable and about those that can only be inferred from their effect on observable quantities.
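As a rough illustration of the idea (in generic notation, not necessarily the chapter's own), ensemble learning typically fits an approximating distribution $q(\theta)$ to the true posterior $p(\theta \mid X)$ by minimising the Kullback-Leibler divergence between them. The divergence can be rearranged so that the intractable model evidence $p(X)$ appears only as an additive constant:

$$
C(q) \;=\; \int q(\theta)\,\ln\frac{q(\theta)}{p(\theta \mid X)}\,\mathrm{d}\theta
\;=\; \int q(\theta)\,\ln\frac{q(\theta)}{p(X \mid \theta)\,p(\theta)}\,\mathrm{d}\theta \;+\; \ln p(X).
$$

Minimising the first integral over a tractable family of distributions (for example, a fully factorised $q$) therefore both selects the approximate posterior and, since the divergence is non-negative, yields a lower bound on $\ln p(X)$ that can be used for model comparison.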





Copyright information

© 2000 Springer-Verlag London

About this chapter

Cite this chapter

Lappalainen, H., Miskin, J.W. (2000). Ensemble Learning. In: Girolami, M. (ed.) Advances in Independent Component Analysis. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0443-8_5


  • DOI: https://doi.org/10.1007/978-1-4471-0443-8_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-263-1

  • Online ISBN: 978-1-4471-0443-8

