
Dynamic Search Trajectory Methods for Neural Network Training

  • Conference paper
Artificial Intelligence and Soft Computing - ICAISC 2004

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3070)

Abstract

Training multilayer feedforward neural networks corresponds to the global minimization of the network error function. To address this problem we utilize the Snyman and Fatti [1] approach by considering a system of second order differential equations of the form \(\ddot{x} = -\nabla E(x)\), where x is the vector of network weights and \(\nabla E\) is the gradient of the network error function E. Equilibrium points of the above system of differential equations correspond to optimizers of the network error function. The proposed approach is described and experimental results are discussed.

This work is partially supported by the “Pythagoras” research grant awarded by the Greek Ministry of Education and Religious Affairs and the European Union.
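
To make the trajectory idea concrete, the following Python sketch numerically integrates the system \(\ddot{x} = -\nabla E(x)\) on a toy quadratic error surface. This is an illustration only, not the Snyman-Fatti algorithm itself: the quadratic E, the semi-implicit Euler integrator, and the added velocity damping (used so the simulated trajectory settles at the equilibrium) are all assumptions made for this demo.

    import numpy as np

    # Toy stand-in for a network error function: E(x) = 0.5 * x^T A x,
    # whose unique minimizer x = 0 is also the equilibrium point of the
    # trajectory dynamics  x'' = -grad E(x).
    A = np.array([[3.0, 0.5],
                  [0.5, 1.0]])

    def grad_E(x):
        return A @ x

    # Integrate the second-order system as the first-order pair
    #   x' = v,  v' = -grad E(x) - c * v,
    # using semi-implicit Euler. The damping coefficient c is not part of
    # the paper's formulation; it is added here so the trajectory comes to
    # rest at the equilibrium instead of oscillating indefinitely.
    def trajectory(x0, dt=0.01, damping=0.5, steps=5000):
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)
        for _ in range(steps):
            v += dt * (-grad_E(x) - damping * v)
            x += dt * v
        return x

    print(trajectory([2.0, -1.5]))  # approaches the minimizer [0, 0]

In an actual training setting, E would be the network error as a function of the weight vector x, and \(\nabla E\) would be computed by backpropagation.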



References

  1. Snyman, J., Fatti, L.: A multi-start global minimization algorithm with dynamic search trajectories. JOTA 54, 121–141 (1987)

  2. Törn, A., Žilinskas, A.: Global Optimization. LNCS, vol. 350. Springer, Heidelberg (1989)

  3. Incerti, S., Parisi, V., Zirilli, F.: A new method for solving nonlinear simultaneous equations. SIAM J. Numer. Anal. 16, 779–789 (1979)

  4. Inomata, S., Cumada, M.: On the golf method. Bulletin of the Electrotechnical Laboratory 25, 495–512 (1964)

  5. Zhidkov, N., Shchedrin, B.: On the search of minimum of a function of several variables. Computing Methods and Programming 10, 203–210 (1978)

  6. Pshenichnyi, B., Marchenko, D.: On one approach to the search for the global minimum. Optimal Decision Theory 2, 3–12 (1967)

  7. Griewank, A.: Generalized descent for global optimization. JOTA 34, 11–39 (1981)

  8. Petalas, Y.G., Tasoulis, D.K., Vrahatis, M.N.: Trajectory methods for neural network training. In: Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA 2004), vol. 1, pp. 400–408. ACTA press (2004)

  9. Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pacific J. Math. 16, 1–3 (1966)

  10. Magoulas, G., Vrahatis, M.N., Androulakis, G.: Effective backpropagation training with variable stepsize. Neural Networks 10, 69–82 (1997)

  11. Magoulas, G., Vrahatis, M.N., Androulakis, G.: Increasing the convergence rate of the error backpropagation algorithm by learning rate adaptation methods. Neural Computation 11, 1769–1796 (1999)

  12. Vogl, T., Mangis, J., Rigler, A., Zink, W., Alkon, D.: Accelerating the convergence of the back-propagation method. Biol. Cybern. 59, 257–263 (1988)

  13. Rao, S.: Optimization Theory and Applications. Wiley Eastern Limited, Chichester (1992)

  14. Riedmiller, M., Braun, H.: A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In: Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, pp. 586–591 (1993)

  15. Igel, C., Hüsken, M.: Improving the Rprop learning algorithm. In: Bothe, H., Rojas, R. (eds.) Proceedings of the Second International ICSC Symposium on Neural Computation (NC 2000), pp. 115–121. ICSC Academic Press, London (2000)

  16. Prechelt, L.: Proben1: A set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Fakultät für Informatik, Universität Karlsruhe (1994)


Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Petalas, Y.G., Tasoulis, D.K., Vrahatis, M.N. (2004). Dynamic Search Trajectory Methods for Neural Network Training. In: Rutkowski, L., Siekmann, J.H., Tadeusiewicz, R., Zadeh, L.A. (eds) Artificial Intelligence and Soft Computing - ICAISC 2004. ICAISC 2004. Lecture Notes in Computer Science, vol 3070. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24844-6_32

  • DOI: https://doi.org/10.1007/978-3-540-24844-6_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22123-4

  • Online ISBN: 978-3-540-24844-6

