Combining regularized neural networks

  • Part I: Coding and Learning in Biology
  • Conference paper
Artificial Neural Networks — ICANN'97 (ICANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1327)

Included in the conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

In this paper we show that the performance improvement achievable by averaging depends critically on the degree of regularization used in training the individual neural networks. We compare four averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. Bagging and variance-based bagging emerge as the best combining methods overall across a wide range of regularization degrees.
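
To make the four combining schemes concrete, the sketch below applies them to a toy regression problem. This is a minimal sketch, not the authors' implementation: it assumes scikit-learn's MLPRegressor, uses its `alpha` weight-decay parameter as a stand-in for the degree of regularization, and estimates each network's error variance from held-out residuals, which is only a hypothetical substitute for the variance estimates used in the paper.

```python
# Sketch of four ensemble combining schemes: simple averaging, bagging,
# variance-based weighting, and variance-based bagging.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-d regression problem with additive noise.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(300)
X_tr, y_tr = X[:200], y[:200]
X_va, y_va = X[200:250], y[200:250]   # held out for variance estimates
X_te, y_te = X[250:], y[250:]

def fit_net(Xf, yf, alpha, seed):
    # One regularized network; `alpha` is the weight-decay strength.
    net = MLPRegressor(hidden_layer_sizes=(10,), alpha=alpha,
                       max_iter=2000, random_state=seed)
    return net.fit(Xf, yf)

def combine(nets, weights, Xq):
    # Weighted average of the individual network predictions on Xq.
    preds = np.stack([n.predict(Xq) for n in nets])   # shape (M, n_query)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * preds).sum(axis=0) / w.sum()

alpha = 1e-3          # degree of regularization under study
M = 10                # ensemble size

# (1) Simple averaging: M nets on the same data, different initializations.
nets = [fit_net(X_tr, y_tr, alpha, seed=m) for m in range(M)]
avg = combine(nets, np.ones(M), X_te)

# (2) Bagging: each net trained on a bootstrap replicate of the data.
bag_nets = []
for m in range(M):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    bag_nets.append(fit_net(X_tr[idx], y_tr[idx], alpha, seed=m))
bag = combine(bag_nets, np.ones(M), X_te)

# (3) Variance-based weighting: weight each net by the inverse of its
#     estimated error variance (here: residual variance on held-out data).
var = [np.mean((n.predict(X_va) - y_va) ** 2) for n in nets]
vbw = combine(nets, 1.0 / np.asarray(var), X_te)

# (4) Variance-based bagging: inverse-variance weights applied to the
#     bootstrap-trained networks.
bvar = [np.mean((n.predict(X_va) - y_va) ** 2) for n in bag_nets]
vbb = combine(bag_nets, 1.0 / np.asarray(bvar), X_te)

for name, p in [("simple avg", avg), ("bagging", bag),
                ("var-weighted", vbw), ("var-based bagging", vbb)]:
    print(f"{name:18s} test MSE: {np.mean((p - y_te) ** 2):.4f}")
```

Sweeping `alpha` over several orders of magnitude and repeating the comparison would expose the kind of regularization dependence the paper studies.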

Author information

M. Taniguchi, V. Tresp

Editor information

Wulfram Gerstner, Alain Germond, Martin Hasler, Jean-Daniel Nicoud

Rights and permissions

Reprints and permissions

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Taniguchi, M., Tresp, V. (1997). Combining regularized neural networks. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, JD. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020179

  • DOI: https://doi.org/10.1007/BFb0020179

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63631-1

  • Online ISBN: 978-3-540-69620-9

  • eBook Packages: Springer Book Archive
