
Expectation-Maximization Algorithm

  • Living reference work entry
  • In: Computer Vision

Synonyms

EM-algorithm

Related Concepts

Definition

The expectation-maximization (EM) algorithm iteratively maximizes the likelihood of a training sample with respect to the unknown parameters of a probability model in the presence of missing information. The training sample is assumed to be a set of independent realizations of a random variable defined on the underlying probability space.
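
The definition can be made concrete by the standard two-step iteration (the full derivation is not shown in this preview); the following is a sketch using generic symbols: \(x\) for the observed data, \(y\) for the missing information, \(\theta\) for the parameters, and \(\theta^{(t)}\) for the current estimate.

\[
Q\bigl(\theta \mid \theta^{(t)}\bigr) = \mathbb{E}_{y \sim p(y \mid x,\, \theta^{(t)})}\bigl[\log p(x, y \mid \theta)\bigr] \quad \text{(E-step)}
\]
\[
\theta^{(t+1)} = \operatorname*{arg\,max}_{\theta \in \Theta}\; Q\bigl(\theta \mid \theta^{(t)}\bigr) \quad \text{(M-step)}
\]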

Background

One of the main paradigms of statistical pattern recognition and Bayesian inference is to model the relation between the observable features \(x\in \mathcal {X}\) of an object and its hidden state \(y\in \mathcal {Y}\) by a joint probability measure p(x, y). This probability measure is, however, often known only up to some parameters θ ∈ Θ. It is thus necessary to estimate these parameters from a training sample, which is assumed to represent a sequence of independent realizations of a random variable. If, ideally,...
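
As an illustration of the estimation problem sketched above, the snippet below applies EM to a one-dimensional mixture of two Gaussians, alternating the E-step and M-step from the definition. The model choice, the number of components, and all identifiers are illustrative assumptions, not part of the original entry.

```python
# A minimal sketch of EM for a 1-D mixture of two Gaussians. The mixture model,
# the component count, and all variable names are illustrative assumptions.
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Estimate mixing weight, means, and variances of a 2-component Gaussian mixture."""
    # Crude initialization from the data.
    pi = 0.5
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()], dtype=float)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample,
        # i.e. p(y = k | x_i, current parameters).
        weights = np.array([1.0 - pi, pi])
        lik = weights * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = lik / lik.sum(axis=1, keepdims=True)  # shape (n, 2)

        # M-step: re-estimate parameters from the expected sufficient statistics.
        nk = resp.sum(axis=0)
        pi = nk[1] / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    return pi, mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
    print(em_gaussian_mixture(sample))
```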



Author information

Correspondence to Boris Flach.



Copyright information

© 2020 Springer Nature Switzerland AG

About this entry


Cite this entry

Flach, B., Hlavac, V. (2020). Expectation-Maximization Algorithm. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_692-1


  • DOI: https://doi.org/10.1007/978-3-030-03243-2_692-1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03243-2

  • Online ISBN: 978-3-030-03243-2

  • eBook Packages: Springer Reference Computer Sciences, Reference Module Computer Science and Engineering
