
Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes

  • Conference paper
Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5499)


Abstract

I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems.
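The central quantity of the abstract, interestingness as the first derivative of compressibility, can be made concrete with a toy sketch. The code below is illustrative only: it uses zlib as a crude stand-in for the observer's adaptive compressor, and it models "what the observer has learned so far" as a preset dictionary. The function names and the dictionary-as-model device are assumptions for this sketch, not the paper's implementation.

```python
import zlib

def compressed_size(data: bytes, model: bytes = b"") -> int:
    # The observer's "compressor": plain DEFLATE, optionally primed with a
    # preset dictionary standing in for knowledge acquired so far.
    if model:
        c = zlib.compressobj(level=9, zdict=model)
    else:
        c = zlib.compressobj(level=9)
    return len(c.compress(data) + c.flush())

def compression_progress(history: bytes, old_model: bytes, new_model: bytes) -> int:
    # Intrinsic reward: how many fewer bytes the improved compressor needs
    # for the *same* history -- the "first derivative" of compressibility.
    return compressed_size(history, old_model) - compressed_size(history, new_model)

# A regularity, once learned (here crudely: the whole history reused as a
# dictionary), makes the same data subjectively simpler; the saving is the
# intrinsic reward that a curious agent would seek to maximize.
history = b"The quick brown fox jumps over the lazy dog. " * 4
reward = compression_progress(history, b"", history)
```

In this framing a curious agent prefers data streams on which such a reward stays large, i.e. streams whose regularities it is still in the process of learning, rather than streams that are already fully compressed (boring) or incompressible noise (also boring).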

First version of this preprint published 23 Dec 2008; revised April 2009. Variants are scheduled to appear as references [90] and [91] (short version), distilling some of the essential ideas in earlier work (1990-2008) on this subject: [57,58,59,60,61,68,72,76,108] and especially recent papers [81, 87, 88, 89].



References

  1. Aleksander, I.: The World in My Mind, My Mind In The World: Key Mechanisms of Consciousness in Humans, Animals and Machines. Imprint Academic (2005)

  2. Baars, B., Gage, N.M.: Cognition, Brain and Consciousness: An Introduction to Cognitive Neuroscience. Elsevier/Academic Press (2007)

  3. Balter, M.: Seeking the key to music. Science 306, 1120–1122 (2004)

  4. Barlow, H.B., Kaushal, T.P., Mitchison, G.J.: Finding minimum entropy codes. Neural Computation 1(3), 412–423 (1989)

  5. Barto, A.G., Singh, S., Chentanez, N.: Intrinsically motivated learning of hierarchical collections of skills. In: Proceedings of International Conference on Developmental Learning (ICDL). MIT Press, Cambridge (2004)

  6. Bense, M.: Einführung in die informationstheoretische Ästhetik. Grundlegung und Anwendung in der Texttheorie (Introduction to information-theoretical aesthetics. Foundation and application to text theory). Rowohlt Taschenbuch Verlag (1969)

  7. Birkhoff, G.D.: Aesthetic Measure. Harvard University Press, Cambridge (1933)

  8. Bishop, C.M.: Neural networks for pattern recognition. Oxford University Press, Oxford (1995)

  9. Blank, D., Meeden, L.: Developmental Robotics AAAI Spring Symposium, Stanford, CA (2005), http://cs.brynmawr.edu/DevRob05/schedule/

  10. Blank, D., Meeden, L.: Introduction to the special issue on developmental robotics. Connection Science 18(2) (2006)

  11. Bongard, J.C., Lipson, H.: Nonlinear system identification using coevolution of models and tests. IEEE Transactions on Evolutionary Computation 9(4) (2005)

  12. Butz, M.V.: How and why the brain lays the foundations for a conscious self. Constructivist Foundations 4(1), 1–14 (2008)

  13. Cañamero, L.D.: Designing emotions for activity selection in autonomous agents. In: Trappl, R., Petta, P., Payr, S. (eds.) Emotions in Humans and Artifacts, pp. 115–148. The MIT Press, Cambridge (2003)

  14. Cohn, D.A.: Neural network exploration using optimal experiment design. In: Cowan, J., Tesauro, G., Alspector, J. (eds.) Advances in Neural Information Processing Systems 6, pp. 679–686. Morgan Kaufmann, San Francisco (1994)

  15. Cramer, N.L.: A representation for the adaptive generation of simple sequential programs. In: Grefenstette, J.J. (ed.) Proceedings of an International Conference on Genetic Algorithms and Their Applications, Carnegie-Mellon University, July 24-26. Lawrence Erlbaum Associates, Hillsdale (1985)

  16. Fedorov, V.V.: Theory of optimal experiments. Academic Press, London (1972)

  17. Galton, F.: Composite portraits made by combining those of many different persons into a single figure. Nature 18(9), 97–100 (1878)

  18. Gödel, K.: Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik 38, 173–198 (1931)

  19. Gomez, F.J.: Robust Nonlinear Control through Neuroevolution. Ph.D thesis, Department of Computer Sciences, University of Texas at Austin (2003)

  20. Gomez, F.J., Miikkulainen, R.: Incremental evolution of complex general behavior. Adaptive Behavior 5, 317–342 (1997)

  21. Gomez, F.J., Miikkulainen, R.: Solving non-Markovian control tasks with neuroevolution. In: Proc. IJCAI 1999, Denver, CO. Morgan Kaufmann, San Francisco (1999)

  22. Gomez, F.J., Miikkulainen, R.: Active guidance for a finless rocket using neuroevolution. In: Proc. GECCO 2003, Chicago (2003); Winner of Best Paper Award in Real World Applications. Gomez is working at IDSIA on a CSEM grant to J. Schmidhuber

  23. Gomez, F.J., Schmidhuber, J.: Co-evolving recurrent neurons learn deep memory POMDPs. In: Proc. of the 2005 conference on genetic and evolutionary computation (GECCO), Washington, D.C. ACM Press, New York (2005); Nominated for a best paper award

  24. Gomez, F.J., Schmidhuber, J., Miikkulainen, R.: Efficient non-linear control through neuroevolution. Journal of Machine Learning Research JMLR 9, 937–965 (2008)

  25. Haikonen, P.: The Cognitive Approach to Conscious Machines. Imprint Academic (2003)

  26. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)

  27. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975)

  28. Huffman, D.A.: A method for construction of minimum-redundancy codes. Proceedings IRE 40, 1098–1101 (1952)

  29. Hutter, M.: Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin (2005); On J. Schmidhuber's SNF grant 20-61847

  30. Hutter, M.: On universal prediction and Bayesian confirmation. Theoretical Computer Science (2007)

  31. Hwang, J., Choi, J., Oh, S., Marks II, R.J.: Query-based learning applied to partially trained multilayer perceptrons. IEEE Transactions on Neural Networks 2(1), 131–136 (1991)

  32. Itti, L., Baldi, P.F.: Bayesian surprise attracts human attention. In: Advances in Neural Information Processing Systems 19, pp. 547–554. MIT Press, Cambridge (2005)

  33. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. Journal of AI research 4, 237–285 (1996)

  34. Kolmogorov, A.N.: Three approaches to the quantitative definition of information. Problems of Information Transmission 1, 1–11 (1965)

  35. Kullback, S.: Statistics and Information Theory. J. Wiley and Sons, New York (1959)

  36. Levin, L.A.: Universal sequential search problems. Problems of Information Transmission 9(3), 265–266 (1973)

  37. Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and its Applications, 2nd edn. Springer, Heidelberg (1997)

  38. MacKay, D.J.C.: Information-based objective functions for active data selection. Neural Computation 4(2), 550–604 (1992)

  39. Miglino, O., Lund, H., Nolfi, S.: Evolving mobile robots in simulated and real environments. Artificial Life 2(4), 417–434 (1995)

  40. Miller, G., Todd, P., Hedge, S.: Designing neural networks using genetic algorithms. In: Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 379–384. Morgan Kaufmann, San Francisco (1989)

  41. Moles, A.: Information Theory and Esthetic Perception. Univ. of Illinois Press (1968)

  42. Moriarty, D.E., Langley, P.: Learning cooperative lane selection strategies for highways. In: Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI 1998), Madison, WI, pp. 684–691. AAAI Press, Menlo Park (1998)

  43. Moriarty, D.E., Miikkulainen, R.: Efficient reinforcement learning through symbiotic evolution. Machine Learning 22, 11–32 (1996)

  44. Nake, F.: Ästhetik als Informationsverarbeitung. Springer, Heidelberg (1974)

  45. Nolfi, S., Floreano, D., Miglino, O., Mondada, F.: How to evolve autonomous robots: Different approaches in evolutionary robotics. In: Brooks, R.A., Maes, P. (eds.) Fourth International Workshop on the Synthesis and Simulation of Living Systems (Artificial Life IV), pp. 190–197. MIT, Cambridge (1994)

  46. Olsson, J.R.: Inductive functional programming using incremental program transformation. Artificial Intelligence 74(1), 55–83 (1995)

  47. Pearlmutter, B.A.: Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks 6(5), 1212–1228 (1995)

  48. Perrett, D.I., May, K.A., Yoshikawa, S.: Facial shape and judgements of female attractiveness. Nature 368, 239–242 (1994)

  49. Piaget, J.: The Child’s Construction of Reality. Routledge and Kegan Paul, London (1955)

  50. Pinker, S.: How the Mind Works. W. W. Norton & Company, Inc. (1997)

  51. Plutowski, M., Cottrell, G., White, H.: Learning Mackey-Glass from 25 examples, plus or minus 2. In: Cowan, J., Tesauro, G., Alspector, J. (eds.) Advances in Neural Information Processing Systems 6, pp. 1135–1142. Morgan Kaufmann, San Francisco (1994)

  52. Poland, J., Hutter, M.: Strong asymptotic assertions for discrete MDL in regression and classification. In: Annual Machine Learning Conference of Belgium and the Netherlands (Benelearn 2005), Enschede (2005)

  53. Rechenberg, I.: Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Dissertation (1971); Fromman-Holzboog (1973)

  54. Rissanen, J.: Modeling by shortest data description. Automatica 14, 465–471 (1978)

  55. Robinson, A.J., Fallside, F.: The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department (1987)

  56. Rückstieß, T., Felder, M., Schmidhuber, J.: State-Dependent Exploration for policy gradient methods. In: Daelemans, W., Goethals, B., Morik, K. (eds.) ECML PKDD 2008, Part II. LNCS(LNAI), vol. 5212, pp. 234–249. Springer, Heidelberg (2008)

  57. Schmidhuber, J.: Dynamische neuronale Netze und das fundamentale raumzeitliche Lernproblem. Dissertation, Institut für Informatik, Technische Universität München (1990)

  58. Schmidhuber, J.: Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report FKI-126-90, Institut für Informatik, Technische Universität München (1990)

  59. Schmidhuber, J.: Adaptive curiosity and adaptive confidence. Technical Report FKI-149-91, Institut für Informatik, Technische Universität München (April 1991); See also [60]

  60. Schmidhuber, J.: Curious model-building control systems. In: Proceedings of the International Joint Conference on Neural Networks, Singapore, vol. 2, pp. 1458–1463. IEEE Press, Los Alamitos (1991)

  61. Schmidhuber, J.: A possibility for implementing curiosity and boredom in model-building neural controllers. In: Meyer, J.A., Wilson, S.W. (eds.) Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pp. 222–227. MIT Press/Bradford Books (1991)

  62. Schmidhuber, J.: A fixed size storage O(n³) time complexity learning algorithm for fully recurrent continually running networks. Neural Computation 4(2), 243–248 (1992)

  63. Schmidhuber, J.: Learning complex, extended sequences using the principle of history compression. Neural Computation 4(2), 234–242 (1992)

  64. Schmidhuber, J.: Learning factorial codes by predictability minimization. Neural Computation 4(6), 863–879 (1992)

  65. Schmidhuber, J.: A computer scientist’s view of life, the universe, and everything. In: Freksa, C., Jantzen, M., Valk, R. (eds.) Foundations of Computer Science. LNCS, vol. 1337, pp. 201–208. Springer, Heidelberg (1997)

  66. Schmidhuber, J.: Femmes fractales (1997)

  67. Schmidhuber, J.: Low-complexity art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology 30(2), 97–103 (1997)

  68. Schmidhuber, J.: What’s interesting? Technical Report IDSIA-35-97, IDSIA (1997), ftp://ftp.idsia.ch/pub/juergen/interest.ps.gz ; extended abstract in Proc. Snowbird 1998, Utah (1998); see also [72]

  69. Schmidhuber, J.: Facial beauty and fractal geometry. Technical Report TR IDSIA-28-98, IDSIA (1998), Published in the Cogprint Archive: http://cogprints.soton.ac.uk

  70. Schmidhuber, J.: Algorithmic theories of everything. Technical Report IDSIA-20-00, quant-ph/0011122, IDSIA, Manno (Lugano), Switzerland, 2000. Sections 1-5: see [73]; Section 6: see [74]

  71. Schmidhuber, J.: Sequential decision making based on direct search. In: Sun, R., Giles, C.L. (eds.) IJCAI-WS 1999. LNCS (LNAI), vol. 1828, p. 213. Springer, Heidelberg (2001)

  72. Schmidhuber, J.: Exploring the predictable. In: Ghosh, A., Tsuitsui, S. (eds.) Advances in Evolutionary Computing, pp. 579–612. Springer, Heidelberg (2002)

  73. Schmidhuber, J.: Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science 13(4), 587–612 (2002)

  74. Schmidhuber, J.: The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In: Kivinen, J., Sloan, R.H. (eds.) COLT 2002. LNCS(LNAI), vol. 2375, pp. 216–228. Springer, Heidelberg (2002)

  75. Schmidhuber, J.: Optimal ordered problem solver. Machine Learning 54, 211–254 (2004)

  76. Schmidhuber, J.: Overview of artificial curiosity and active exploration, with links to publications since 1990 (2004), http://www.idsia.ch/~juergen/interest.html

  77. Schmidhuber, J.: Overview of work on robot learning, with publications (2004), http://www.idsia.ch/~juergen/learningrobots.html

  78. Schmidhuber, J.: RNN overview, with links to a dozen journal publications (2004), http://www.idsia.ch/~juergen/rnn.html

  79. Schmidhuber, J.: Completely self-referential optimal reinforcement learners. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) ICANN 2005. LNCS, vol. 3697, pp. 223–233. Springer, Heidelberg (2005)

  80. Schmidhuber, J.: Gödel machines: Towards a technical justification of consciousness. In: Kudenko, D., Kazakov, D., Alonso, E. (eds.) Adaptive Agents and Multi-Agent Systems III. LNCS, vol. 3394, pp. 1–23. Springer, Heidelberg (2005)

  81. Schmidhuber, J.: Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science 18(2), 173–187 (2006)

  82. Schmidhuber, J.: Gödel machines: Fully self-referential optimal universal self-improvers. In: Goertzel, B., Pennachin, C. (eds.) Artificial General Intelligence, pp. 199–226. Springer, Heidelberg (2006), arXiv:cs.LO/0309048

  83. Schmidhuber, J.: The new AI: General & sound & relevant for physics. In: Goertzel, B., Pennachin, C. (eds.) Artificial General Intelligence, pp. 175–198. Springer, Heidelberg (2006), TR IDSIA-04-03, arXiv:cs.AI/0302012

  84. Schmidhuber, J.: Randomness in physics. Nature 439(3), 392 (2006) (Correspondence)

  85. Schmidhuber, J.: 2006: Celebrating 75 years of AI - history and outlook: the next 25 years. In: Lungarella, M., Iida, F., Bongard, J., Pfeifer, R. (eds.) 50 Years of Artificial Intelligence. LNCS (LNAI), vol. 4850, pp. 29–41. Springer, Heidelberg (2007)

  86. Schmidhuber, J.: New millennium AI and the convergence of history. In: Duch, W., Mandziuk, J. (eds.) Challenges to Computational Intelligence. Studies in Computational Intelligence, vol. 63, pp. 15–36. Springer, Heidelberg (2007), arXiv:cs.AI/0606081

  87. Schmidhuber, J.: Simple algorithmic principles of discovery, subjective beauty, selective attention, curiosity & creativity. In: Hutter, M., Servedio, R.A., Takimoto, E. (eds.) ALT 2007. LNCS (LNAI), vol. 4754, pp. 32–33. Springer, Heidelberg (2007)

  88. Schmidhuber, J.: Simple algorithmic principles of discovery, subjective beauty, selective attention, curiosity & creativity. In: Corruble, V., Takeda, M., Suzuki, E. (eds.) DS 2007. LNCS (LNAI), vol. 4755, pp. 26–38. Springer, Heidelberg (2007)

  89. Schmidhuber, J.: Driven by compression progress. In: Lovrek, I., Howlett, R.J., Jain, L.C. (eds.) KES 2008, Part I. LNCS, vol. 5177, p. 11. Springer, Heidelberg (2008); Abstract of invited keynote

  90. Schmidhuber, J.: Driven by compression progress: A simple principle explains essential aspects of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes. In: Pezzulo, G., Butz, M.V., Sigaud, O., Baldassarre, G. (eds.) Anticipatory Behavior in Adaptive Learning Systems. LNCS (LNAI), vol. 5499, pp. 48–76. Springer, Heidelberg (2009) (in press)

  91. Schmidhuber, J.: Simple algorithmic theory of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes. Journal of SICE 48(1) (2009) (in press)

  92. Schmidhuber, J.: Ultimate cognition à la Gödel. Cognitive Computation (2009) (in press)

  93. Schmidhuber, J., Bakker, B.: NIPS, RNNaissance workshop on recurrent neural networks, Whistler, CA (2003), http://www.idsia.ch/~juergen/rnnaissance.html

  94. Schmidhuber, J., Graves, A., Gomez, F.J., Fernandez, S., Hochreiter, S.: How to Learn Programs with Artificial Recurrent Neural Networks. Cambridge University Press, Cambridge (2009) (in preparation)

  95. Schmidhuber, J., Heil, S.: Sequential neural text compression. IEEE Transactions on Neural Networks 7(1), 142–146 (1996)

  96. Schmidhuber, J., Huber, R.: Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems 2(1&2), 135–141 (1991)

  97. Schmidhuber, J., Zhao, J., Schraudolph, N.: Reinforcement learning with self-modifying policies. In: Thrun, S., Pratt, L. (eds.) Learning to learn, pp. 293–309. Kluwer, Dordrecht (1997)

  98. Schmidhuber, J., Zhao, J., Wiering, M.: Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning 28, 105–130 (1997)

  99. Schwefel, H.P.: Numerische Optimierung von Computer-Modellen. Dissertation, 1974. Birkhäuser, Basel (1977)

  100. Sehnke, F., Osendorfer, C., Rückstieß, T., Graves, A., Peters, J., Schmidhuber, J.: Policy gradients with parameter-based exploration for control. In: Proceedings of the International Conference on Artificial Neural Networks ICANN (2008)

  101. Seth, A.K., Izhikevich, E., Reeke, G.N., Edelman, G.M.: Theories and measures of consciousness: An extended framework. Proc. Natl. Acad. Sciences USA 103, 10799–10804 (2006)

  102. Shannon, C.E.: A mathematical theory of communication (parts I and II). Bell System Technical Journal XXVII, 379–423 (1948)

  103. Sims, K.: Evolving virtual creatures. In: Glassner, A. (ed.) Proceedings of SIGGRAPH 1994, Computer Graphics Proceedings, Annual Conference, Orlando, Florida, July 1994, pp. 15–22. ACM SIGGRAPH, ACM Press, New York (1994) ISBN 0-89791-667-0

  104. Singh, S., Barto, A.G., Chentanez, N.: Intrinsically motivated reinforcement learning. In: Advances in Neural Information Processing Systems 17 (NIPS). MIT Press, Cambridge (2005)

  105. Sloman, A., Chrisley, R.L.: Virtual machines and consciousness. Journal of Consciousness Studies 10(4-5), 113–172 (2003)

  106. Solomonoff, R.J.: A formal theory of inductive inference. Part I. Information and Control 7, 1–22 (1964)

  107. Solomonoff, R.J.: Complexity-based induction systems. IEEE Transactions on Information Theory IT-24(5), 422–432 (1978)

  108. Storck, J., Hochreiter, S., Schmidhuber, J.: Reinforcement driven information acquisition in non-deterministic environments. In: Proceedings of the International Conference on Artificial Neural Networks, Paris, vol. 2, pp. 159–164. EC2 & Cie (1995)

  109. Sutton, R., Barto, A.: Reinforcement learning: An introduction. MIT Press, Cambridge (1998)

  110. Sutton, R.S., McAllester, D.A., Singh, S.P., Mansour, Y.: Policy gradient methods for reinforcement learning with function approximation. In: Solla, S.A., Leen, T.K., Müller, K.-R. (eds.) Advances in Neural Information Processing Systems 12, NIPS Conference, Denver, Colorado, USA, November 29 - December 4, pp. 1057–1063. The MIT Press, Cambridge (1999)

  111. Turing, A.M.: On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2(41), 230–267 (1936)

  112. Wallace, C.S., Boulton, D.M.: An information theoretic measure for classification. Computer Journal 11(2), 185–194 (1968)

  113. Wallace, C.S., Freeman, P.R.: Estimation and inference by compact coding. Journal of the Royal Statistical Society, Series “B” 49(3), 240–265 (1987)

  114. Watkins, C.J.C.H.: Learning from Delayed Rewards. Ph.D thesis, King’s College, Cambridge (1989)

  115. Werbos, P.J.: Generalization of backpropagation with application to a recurrent gas market model. Neural Networks 1 (1988)

  116. Whitehead, S.D.: Reinforcement Learning for the adaptive control of perception and action. Ph.D thesis, University of Rochester (February 1992)

  117. Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.: Fitness expectation maximization. In: Rudolph, G., Jansen, T., Lucas, S., Poloni, C., Beume, N. (eds.) PPSN 2008. LNCS, vol. 5199. Springer, Heidelberg (2008)

  118. Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.: Natural evolution strategies. In: Congress of Evolutionary Computation, CEC 2008 (2008)

  119. Wierstra, D., Schmidhuber, J.: Policy gradient critics. In: Kok, J.N., Koronacki, J., Lopez de Mantaras, R., Matwin, S., Mladenič, D., Skowron, A. (eds.) ECML 2007. LNCS, vol. 4701, pp. 466–477. Springer, Heidelberg (2007)

  120. Williams, R.J., Zipser, D.: Gradient-based learning algorithms for recurrent networks and their computational complexity. In: Back-propagation: Theory, Architectures and Applications. Erlbaum, Hillsdale (1994)

  121. Yamauchi, B.M., Beer, R.D.: Sequential behavior and learning in evolved dynamical neural networks. Adaptive Behavior 2(3), 219–246 (1994)

  122. Yao, X.: A review of evolutionary artificial neural networks. International Journal of Intelligent Systems 4, 203–222 (1993)

  123. Zuse, K.: Rechnender Raum. Elektronische Datenverarbeitung 8, 336–344 (1967)

  124. Zuse, K.: Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig (1969); English translation: Calculating Space, MIT Technical Translation AZT-70-164-GEMIT, Massachusetts Institute of Technology (Proj. MAC), Cambridge, Mass. 02139 (February 1970)

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schmidhuber, J. (2009). Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes. In: Pezzulo, G., Butz, M.V., Sigaud, O., Baldassarre, G. (eds) Anticipatory Behavior in Adaptive Learning Systems. ABiALS 2008. Lecture Notes in Computer Science(), vol 5499. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02565-5_4

  • DOI: https://doi.org/10.1007/978-3-642-02565-5_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02564-8

  • Online ISBN: 978-3-642-02565-5

  • eBook Packages: Computer Science (R0)
