
Variational Learning in Graphical Models and Neural Networks

  • Conference paper

ICANN 98 (ICANN 1998)

Part of the book series: Perspectives in Neural Computing

Abstract

Variational methods are becoming increasingly popular for inference and learning in probabilistic models. By providing bounds on quantities of interest, they offer a more controlled approximation framework than techniques such as Laplace’s method, while avoiding the mixing and convergence issues of Markov chain Monte Carlo methods, or the possible computational intractability of exact algorithms. In this paper we review the underlying framework of variational methods and discuss example applications involving sigmoid belief networks, Boltzmann machines and feed-forward neural networks.
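The central idea in the abstract — that a variational distribution gives a lower bound on the log marginal likelihood, with equality at the true posterior — can be illustrated numerically. The following is a minimal sketch with a toy model (one binary latent variable, one binary observation; the probabilities are invented for illustration and do not come from the paper):

```python
import math

# Toy model: binary latent z, binary observation x.
# All numbers below are illustrative, not from the paper.
p_z1 = 0.3                      # prior p(z=1)
p_x1_given = {0: 0.2, 1: 0.9}   # likelihood p(x=1 | z)

def log_joint(x, z):
    """log p(x, z) = log p(z) + log p(x | z)."""
    pz = p_z1 if z == 1 else 1.0 - p_z1
    px = p_x1_given[z] if x == 1 else 1.0 - p_x1_given[z]
    return math.log(pz) + math.log(px)

def elbo(x, q1):
    """Variational lower bound E_q[log p(x,z) - log q(z)], with q(z=1) = q1."""
    total = 0.0
    for z, qz in ((0, 1.0 - q1), (1, q1)):
        if qz > 0.0:
            total += qz * (log_joint(x, z) - math.log(qz))
    return total

x = 1
log_px = math.log(sum(math.exp(log_joint(x, z)) for z in (0, 1)))
post_z1 = math.exp(log_joint(x, 1) - log_px)  # exact posterior p(z=1 | x)

# The bound holds for every q, and is tight at the true posterior.
assert all(elbo(x, q) <= log_px + 1e-12 for q in (0.1, 0.5, post_z1, 0.9))
assert abs(elbo(x, post_z1) - log_px) < 1e-12
```

Maximising the bound over `q1` therefore recovers the exact posterior here; in the models discussed in the paper the posterior is intractable, and the bound is instead maximised over a restricted (e.g. factorised) family.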



References

  1. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.

  2. R. M. Neal and G. E. Hinton. A new view of the EM algorithm that justifies incremental and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.

  3. A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B, 39(1):1–38, 1977.

  4. L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61–76, 1996.

  5. T. Jaakkola and M. I. Jordan. Approximating posteriors via mixture models. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.

  6. C. M. Bishop, N. Lawrence, T. Jaakkola, and M. I. Jordan. Approximating posterior distributions in belief networks using mixtures. In Advances in Neural Information Processing Systems, volume 10, 1998.

  7. B. Frey, N. Lawrence, and C. M. Bishop. Markovian inference in belief networks. Draft technical report, 1998.

  8. D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147–169, 1985.

  9. C. Peterson and J. R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987.

  10. N. Lawrence, C. M. Bishop, and M. I. Jordan. Mixture representations for inference and learning in Boltzmann machines. In Uncertainty in Artificial Intelligence. Morgan Kaufmann, 1998.

  11. C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.

  12. D. J. C. MacKay. A practical Bayesian framework for back-propagation networks. Neural Computation, 4(3):448–472, 1992.

  13. G. E. Hinton and D. van Camp. Keeping neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5–13, 1993.

  14. D. Barber and C. M. Bishop. Variational learning in Bayesian neural networks. In C. M. Bishop, editor, Generalization in Neural Networks and Machine Learning. Springer Verlag, 1998.


Copyright information

© 1998 Springer-Verlag London

About this paper

Cite this paper

Bishop, C.M. (1998). Variational Learning in Graphical Models and Neural Networks. In: Niklasson, L., Bodén, M., Ziemke, T. (eds) ICANN 98. ICANN 1998. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1599-1_2

  • DOI: https://doi.org/10.1007/978-1-4471-1599-1_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76263-8

  • Online ISBN: 978-1-4471-1599-1

  • eBook Packages: Springer Book Archive
