
Avoiding Local Minima in Feedforward Neural Networks by Simultaneous Learning

  • Conference paper
AI 2007: Advances in Artificial Intelligence (AI 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4830)

Abstract

Feedforward neural networks are particularly useful for learning a training dataset without prior knowledge. However, adjusting the weights by gradient descent can leave the network trapped in a local minimum. Repeated training from random starting weights is a popular way to avoid this problem, but it requires extensive computation time. This paper proposes a simultaneous training method with removal criteria that eliminate less promising networks, which lowers the probability of settling in a local minimum while using resources efficiently. Experimental results demonstrate the effectiveness and efficiency of the proposed method in comparison with conventional training.
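The core idea described in the abstract, running several randomly initialized learners in parallel and periodically discarding the least promising ones, can be sketched on a toy non-convex objective. The objective function, the culling schedule, and the rank-based removal criterion below are illustrative assumptions, not the paper's actual criteria or network model:

```python
import random

def loss(w):
    # Toy non-convex objective with two minima: a shallow local one
    # near w = +1 and a deeper (global) one near w = -1.
    return (w ** 2 - 1) ** 2 + 0.3 * w

def grad(w):
    # Analytic derivative of the toy objective above.
    return 4 * w * (w ** 2 - 1) + 0.3

def simultaneous_descent(n_starts=8, steps=300, lr=0.01,
                         cull_every=50, seed=0):
    """Run several gradient descents from random starts in parallel.

    Every `cull_every` steps, remove the worst half of the candidates
    by current loss (a simple rank cut, standing in for the paper's
    removal criteria), so resources concentrate on promising runs.
    """
    rng = random.Random(seed)
    ws = [rng.uniform(-2.0, 2.0) for _ in range(n_starts)]
    for t in range(1, steps + 1):
        ws = [w - lr * grad(w) for w in ws]          # parallel updates
        if t % cull_every == 0 and len(ws) > 1:
            ws.sort(key=loss)                        # best first
            ws = ws[: max(1, len(ws) // 2)]          # drop worst half
    return min(ws, key=loss)

best = simultaneous_descent()
```

With a single random start, roughly half the initializations would converge to the shallow minimum; running several in parallel and culling by loss makes reaching the deeper basin far more likely without paying the full cost of every restart.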




Editor information

Mehmet A. Orgun, John Thornton


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Atakulreka, A., Sutivong, D. (2007). Avoiding Local Minima in Feedforward Neural Networks by Simultaneous Learning. In: Orgun, M.A., Thornton, J. (eds) AI 2007: Advances in Artificial Intelligence. AI 2007. Lecture Notes in Computer Science, vol 4830. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76928-6_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-540-76928-6_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-76926-2

  • Online ISBN: 978-3-540-76928-6

  • eBook Packages: Computer Science (R0)
