Optimality of pocket algorithm

  • Poster Presentations 1
  • Conference paper

Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1112)

Abstract

Many constructive methods use the pocket algorithm as a basic component in the training of multilayer perceptrons. This is mainly due to the good properties of the pocket algorithm, which are supported by a convergence theorem asserting its optimality.
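
For reference, the following Python sketch (not taken from the paper) illustrates the pocket algorithm in its standard form for a single threshold unit: perceptron updates are applied to randomly drawn training examples, while a separate "pocket" keeps the weight vector that has survived the longest run of consecutive correct classifications. The function name and parameters are illustrative.

    import numpy as np

    def pocket_train(X, y, n_iters=10000, seed=None):
        # Pocket algorithm sketch for a single threshold unit.
        # X: (n, d) array of inputs; y: labels in {-1, +1}.
        rng = np.random.default_rng(seed)
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # fold in a bias term
        w = np.zeros(Xb.shape[1])      # current perceptron weights
        pocket_w = w.copy()            # best weights found so far
        run, best_run = 0, 0           # consecutive correct classifications

        for _ in range(n_iters):
            i = rng.integers(len(y))           # draw a random example
            if y[i] * (Xb[i] @ w) > 0:         # correctly classified
                run += 1
                if run > best_run:             # longest run so far:
                    best_run = run             # store weights in the pocket
                    pocket_w = w.copy()
            else:                              # misclassified:
                w = w + y[i] * Xb[i]           # perceptron update
                run = 0                        # the run is broken
        return pocket_w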

Unfortunately, the original proof holds only vacuously and does not ensure that an optimal weight vector is asymptotically reached in the general case. This inadequacy can be overcome by a different approach, which leads to the desired result.

Moreover, a modified version of this learning method, called the pocket algorithm with ratchet, is shown to reach an optimal configuration within a finite number of iterations, independently of the given training set.
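
A minimal sketch of the ratchet variant, under the same illustrative assumptions as above: before a candidate weight vector replaces the pocket, its error over the whole training set is computed, and the replacement is accepted only if it misclassifies strictly fewer examples, so the stored error count can never increase.

    def pocket_ratchet_train(X, y, n_iters=10000, seed=None):
        # Pocket algorithm with ratchet: the pocket is replaced only when
        # the candidate weights misclassify strictly fewer training
        # examples, at the cost of an error count over the whole set.
        rng = np.random.default_rng(seed)
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])

        def errors(v):
            # number of training examples misclassified by weights v
            return int(np.sum(y * (Xb @ v) <= 0))

        w = np.zeros(Xb.shape[1])
        pocket_w, pocket_err = w.copy(), errors(w)
        run, best_run = 0, 0

        for _ in range(n_iters):
            i = rng.integers(len(y))
            if y[i] * (Xb[i] @ w) > 0:
                run += 1
                if run > best_run:
                    best_run = run
                    e = errors(w)              # ratchet: full-set check
                    if e < pocket_err:         # strict improvement only
                        pocket_w, pocket_err = w.copy(), e
            else:
                w = w + y[i] * Xb[i]
                run = 0
        return pocket_w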

Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Muselli, M. (1996). Optimality of pocket algorithm. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_87

  • DOI: https://doi.org/10.1007/3-540-61510-5_87

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2
