
Optimal Mutation Rate Using Bayesian Priors for Estimation of Distribution Algorithms

  • Conference paper
  • Stochastic Algorithms: Foundations and Applications (SAGA 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2264)

Abstract

UMDA (the univariate marginal distribution algorithm) was derived by analyzing the mathematical principles behind recombination. Mutation, however, was not considered. The same is true for the FDA (factorized distribution algorithm), an extension of UMDA that can cover dependencies between variables. In this paper mutation is introduced into these algorithms by a technique called the Bayesian prior. We theoretically derive an estimate of how to choose the Bayesian prior. The recommended Bayesian prior turns out to be a good choice in a number of experiments. These experiments also indicate that mutation in many cases increases the performance of the algorithms and decreases the dependence on a good choice of the population size.

Real World Computing Partnership
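
To make the mechanism in the abstract concrete, the sketch below shows how a Bayesian prior can be folded into the marginal frequency estimates of a UMDA-style algorithm as pseudo-counts. This is a minimal illustration only: the OneMax objective, truncation selection, the population sizes and the prior strength r are assumptions chosen for the example, not the estimate derived in the paper.

```python
# Minimal UMDA-style sketch with a Bayesian (Laplace) prior on the marginals.
# Illustrative assumptions: the OneMax objective, truncation selection, the
# population sizes and the prior strength r are example choices, not the
# values recommended in the paper.
import random


def onemax(x):
    """Toy objective: count the ones in a bit string."""
    return sum(x)


def umda_with_prior(n=30, pop_size=50, truncation=0.5, r=1.0,
                    generations=60, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n                      # start from the uniform distribution
    best = None
    for _ in range(generations):
        # Sample a population from the univariate marginals.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=onemax, reverse=True)
        if best is None or onemax(pop[0]) > onemax(best):
            best = pop[0]
        # Truncation selection: keep the best fraction of the population.
        selected = pop[:max(1, int(truncation * pop_size))]
        m = len(selected)
        for i in range(n):
            ones = sum(ind[i] for ind in selected)
            # Bayesian prior as pseudo-counts: r "virtual" ones and r "virtual"
            # zeros are added to the observed counts.  r = 0 gives the plain
            # maximum-likelihood UMDA estimate; r > 0 keeps every marginal
            # strictly inside (0, 1), which acts like mutation.
            p[i] = (ones + r) / (m + 2 * r)
    return best


if __name__ == "__main__":
    solution = umda_with_prior()
    print("best fitness:", onemax(solution), "out of", len(solution))
```

With r = 0 the update reduces to plain relative frequencies, so a marginal that reaches 0 or 1 stays there and the corresponding bit can never change again; a positive r keeps every marginal strictly between 0 and 1, which is the mutation-like effect the abstract attributes to the Bayesian prior.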

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mahnig, T., Mühlenbein, H. (2001). Optimal Mutation Rate Using Bayesian Priors for Estimation of Distribution Algorithms. In: Steinhöfel, K. (eds) Stochastic Algorithms: Foundations and Applications. SAGA 2001. Lecture Notes in Computer Science, vol 2264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45322-9_2

  • DOI: https://doi.org/10.1007/3-540-45322-9_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43025-4

  • Online ISBN: 978-3-540-45322-2

  • eBook Packages: Springer Book Archive
