
On Chaos and Neural Networks: The Backpropagation Paradigm


Abstract

In training feed-forward neural networks with the backpropagation algorithm, a sensitivity to the values of the algorithm's parameters has been observed. In particular, this sensitivity to parameter values, such as the learning rate, plays an important role in the final outcome. In this tutorial paper, we look at neural networks from a dynamical-systems point of view and examine their properties. To this end, we collect results from chaos theory as well as on the backpropagation algorithm, and establish a relationship between them. As a detailed example, we study the learning of the exclusive OR, an elementary Boolean function. The following conclusions hold for our XOR neural network: no chaos appears for learning rates lower than 5, and when chaos does occur, it disappears as learning progresses. For non-chaotic learning rates, the network learns faster than for learning rates at which chaos occurs.
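
As a rough, illustrative sketch of the kind of experiment described in the abstract (not the authors' setup), the Python snippet below trains a small 2-2-1 feed-forward network on XOR with plain batch backpropagation at two learning rates and reports the final mean squared error. The architecture, the initialisation, the learning-rate values, and the epoch count are assumptions chosen only to make the learning-rate sensitivity visible.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(lr, epochs=5000, seed=0):
    # 2-2-1 network with sigmoid units and small random initial weights.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(2, 2))
    b1 = np.zeros(2)
    W2 = rng.normal(scale=0.5, size=(2, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass over the full four-pattern batch.
        H = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 2)
        Y = sigmoid(H @ W2 + b2)      # outputs, shape (4, 1)
        E = Y - T
        # Backward pass: deltas for the squared-error loss with sigmoid units.
        dY = E * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        # Plain gradient-descent update; lr is the learning rate under study.
        W2 -= lr * H.T @ dY
        b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH
        b1 -= lr * dH.sum(axis=0)
    return float(np.mean(E ** 2))     # final mean squared error

# Compare a small and a large learning rate (values are illustrative only).
for lr in (0.5, 10.0):
    print(f"learning rate {lr:>4}: final MSE = {train_xor(lr):.4f}")

The sketch only exposes the qualitative dependence of the outcome on the learning rate; deciding whether a given rate actually produces chaotic weight trajectories requires the dynamical-systems analysis developed in the paper itself.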

Cite this article

Bertels, K., Neuberg, L., Vassiliadis, S. et al. On Chaos and Neural Networks: The Backpropagation Paradigm. Artificial Intelligence Review 15, 165–187 (2001). https://doi.org/10.1023/A:1011045100817
