
Knowledge Incorporation Through Lifetime Learning

Chapter in: Knowledge Incorporation in Evolutionary Computation

Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 167)

Summary

Evolutionary computation is known to require long computation times on large problems. This chapter examines whether the evolution process can be improved by incorporating domain-specific knowledge into evolutionary computation through lifetime learning. Different approaches to combining lifetime learning and evolution are compared. While the Lamarckian approach is able to speed up evolution and improve solution quality, the Baldwinian approach is found to be inefficient. Empirical analysis suggests that this inefficiency arises because genetic operations have difficulty producing genotypic changes that match the phenotypic changes obtained by learning. This indicates that indiscriminately combining evolutionary computation with whatever learning method is available is not a sound way to construct hybrid algorithms; rather, the correlation between the genetic operations and the learning method should be carefully considered.
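The distinction the summary draws can be made concrete with a toy hybrid algorithm. The sketch below is not from the chapter; all function names and parameters are hypothetical, and the objective is a trivial one-dimensional problem chosen only to show where the two schemes differ: in the Lamarckian variant the learned phenotype is written back into the genotype, while in the Baldwinian variant learning only influences the fitness used for selection and the genotype is left unchanged.

```python
import random

def fitness(x):
    # Toy objective: maximize -(x - 3)^2, with the optimum at x = 3.
    return -(x - 3.0) ** 2

def learn(x, steps=5, step_size=0.1):
    # Lifetime learning modeled as local hill-climbing on the phenotype.
    for _ in range(steps):
        for cand in (x + step_size, x - step_size):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def evolve(mode, generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for geno in pop:
            pheno = learn(geno)  # lifetime learning refines the phenotype
            if mode == "lamarckian":
                geno = pheno     # learned traits are written back to the genotype
            # Baldwinian: selection sees the learned fitness, genotype unchanged
            scored.append((fitness(pheno), geno))
        scored.sort(reverse=True)
        parents = [g for _, g in scored[: pop_size // 2]]
        # Reproduce the top half with Gaussian mutation.
        pop = [rng.choice(parents) + rng.gauss(0.0, 0.5) for _ in range(pop_size)]
    return max(fitness(learn(g)) for g in pop)
```

In the Baldwinian branch the improvement found by `learn` is discarded after selection, so evolution must rediscover it genetically; this is exactly the mismatch between genetic operations and learned phenotypic changes that the chapter conjectures makes the Baldwinian approach inefficient.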




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Ku, K.W.C., Mak, M.W. (2005). Knowledge Incorporation Through Lifetime Learning. In: Jin, Y. (eds) Knowledge Incorporation in Evolutionary Computation. Studies in Fuzziness and Soft Computing, vol 167. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-44511-1_17


  • DOI: https://doi.org/10.1007/978-3-540-44511-1_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-06174-5

  • Online ISBN: 978-3-540-44511-1

  • eBook Packages: Engineering, Engineering (R0)
