Extending and benchmarking the CasPer algorithm

  • Neural Networks
  • Conference paper

Advanced Topics in Artificial Intelligence (AI 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1342)

Abstract

The CasPer algorithm is a constructive neural network algorithm. CasPer creates cascade network architectures in a similar manner to Cascade Correlation. CasPer, however, uses a modified form of the RPROP algorithm, termed Progressive RPROP, to train the whole network after the addition of each new hidden neuron. Previous work with CasPer has shown that it builds networks that generalise better than CasCor, often using fewer hidden neurons. This work adds two extensions to CasPer. First, an enhancement to the RPROP algorithm, SARPROP, is used to train newly installed hidden neurons. The second extension involves the use of a pool of hidden neurons, each trained using SARPROP, with the best-performing neuron selected for insertion into the network. These extensions are shown to result in CasPer producing more compact networks which often generalise better than those produced by the original CasPer algorithm.
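
As a rough illustration of the two extensions, the sketch below trains a pool of candidate hidden neurons with a SARPROP-style update (RPROP's sign-based step plus a noise term that anneals over training) and installs the best-performing candidate. This is a minimal sketch under stated assumptions, not the paper's implementation: the candidate objective, the constants, the noise schedule, and the helper names (`sarprop_step`, `train_candidate`, `install_best_from_pool`) are all illustrative.

```python
# Illustrative sketch only: a SARPROP-style update used to train a pool of
# candidate hidden neurons, keeping the best performer. Constants and the
# noise schedule are assumptions, not the paper's specification.
import numpy as np

def sarprop_step(weights, grad, prev_grad, step, epoch,
                 eta_plus=1.2, eta_minus=0.5,
                 step_max=50.0, step_min=1e-6, temp=0.02):
    # Adapt per-weight step sizes by gradient sign agreement, as in RPROP.
    agree = grad * prev_grad
    step = np.where(agree > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(agree < 0, np.maximum(step * eta_minus, step_min), step)
    # Annealed noise term (the "simulated annealing" flavour of SARPROP);
    # the 2**(-temp * epoch) schedule here is an assumed example.
    noise = np.random.uniform(-1.0, 1.0, weights.shape) * step * 2.0 ** (-temp * epoch)
    weights = weights - np.sign(grad) * step + noise
    return weights, step, grad

def train_candidate(X, residual, epochs=100, rng=None):
    # Train a single tanh hidden neuron to fit the current network residual
    # (a stand-in for whatever candidate objective the full algorithm uses).
    rng = rng or np.random.default_rng()
    w = rng.uniform(-0.7, 0.7, X.shape[1])
    step = np.full_like(w, 0.1)
    prev_grad = np.zeros_like(w)
    for epoch in range(epochs):
        out = np.tanh(X @ w)
        err = out - residual
        grad = X.T @ (err * (1.0 - out ** 2)) / len(X)   # d(MSE)/dw
        w, step, prev_grad = sarprop_step(w, grad, prev_grad, step, epoch)
    mse = float(np.mean((np.tanh(X @ w) - residual) ** 2))
    return w, mse

def install_best_from_pool(X, residual, pool_size=8):
    # Second extension: train several candidates independently and keep
    # the one with the lowest error before inserting it into the network.
    pool = [train_candidate(X, residual) for _ in range(pool_size)]
    return min(pool, key=lambda candidate: candidate[1])

# Example: fit a candidate pool to a toy residual signal.
X = np.random.randn(200, 3)
residual = np.tanh(X @ np.array([0.5, -1.0, 0.25]))
w_best, err_best = install_best_from_pool(X, residual)
```

In the algorithm proper, the installed neuron becomes a new cascade unit and the whole network is then retrained with Progressive RPROP; the pool simply trades extra candidate-training cost for a better-placed neuron, which is how the extensions yield more compact networks.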


References

  1. Treadgold, N.K. and Gedeon, T.D. "A Cascade Network Employing Progressive RPROP," Proc. Int. Work-Conf. on Artificial and Natural Neural Networks, pp. 733–742, 1997.

  2. Fahlman, S.E. and Lebiere, C. "The cascade-correlation learning architecture," in Advances in Neural Information Processing Systems, vol. 2, D.S. Touretzky (Ed.), San Mateo, CA: Morgan Kaufmann, pp. 524–532, 1990.

  3. Hwang, J., You, S., Lay, S. and Jou, I. "The Cascade-Correlation Learning: A Projection Pursuit Learning Perspective," IEEE Trans. Neural Networks, 7(2), pp. 278–289, 1996.

  4. Kwok, T. and Yeung, D. "Experimental Analysis of Input Weight Freezing in Constructive Neural Networks," Proc. IEEE Int. Conf. on Neural Networks, pp. 511–516, 1993.

  5. Treadgold, N.K. and Gedeon, T.D. "A Simulated Annealing Enhancement to Resilient Backpropagation," Proc. Int. Panel Conf. on Soft and Intelligent Computing, Budapest, pp. 293–298, 1996.

  6. Riedmiller, M. and Braun, H. "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm," Proc. IEEE Int. Conf. on Neural Networks, pp. 586–591, 1993.

  7. Fahlman, S.E. "An Empirical Study of Learning Speed in Back-Propagation Networks," Technical Report CMU-CS-88-162, Carnegie Mellon University, Pittsburgh, PA, 1988.

  8. Murphy, P.M. and Aha, D.W. "UCI Repository of Machine Learning Databases," [http://www.ics.uci.edu/~mlearn/MLRepository.html], Irvine, CA: University of California, Department of Information and Computer Science, 1994.

  9. Treadgold, N.K. and Gedeon, T.D. "Extending CasPer: A Regression Survey," Int. Conf. on Neural Information Processing, to appear, 1997.

Author information

N.K. Treadgold and T.D. Gedeon

Editor information

Abdul Sattar

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Treadgold, N.K., Gedeon, T.D. (1997). Extending and benchmarking the CasPer algorithm. In: Sattar, A. (eds) Advanced Topics in Artificial Intelligence. AI 1997. Lecture Notes in Computer Science, vol 1342. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63797-4_93

  • DOI: https://doi.org/10.1007/3-540-63797-4_93

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63797-4

  • Online ISBN: 978-3-540-69649-0
