Improving Learning and Generalization in Neural Networks through the Acquisition of Multiple Related Functions

Conference paper
4th Neural Computation and Psychology Workshop, London, 9–11 April 1997

Part of the book series: Perspectives in Neural Computing

Abstract

This paper presents evidence from connectionist simulations supporting the idea that forcing neural networks to learn several related functions together results in both improved learning and better generalization. More specifically, if a neural network employing gradient descent learning must capture the regularities of many semi-correlated sources of information within the same representational substrate, it is constrained to represent only hypotheses that are consistent with all the cues provided. When the different sources of information are sufficiently correlated, the number of candidate solutions is reduced through the development of more efficient representations. To illustrate this, the paper draws briefly on research in the neural network engineering literature before focusing on recent connectionist work on the segmentation of speech. Finally, some implications of the present approach for language acquisition are discussed.
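
The following is a minimal sketch, not taken from the paper, of the idea the abstract describes: a single hidden layer shared by two output heads, so that gradient descent must find hidden representations consistent with both related functions. The toy functions, network size, and learning rate are illustrative assumptions, not the simulations reported in the paper.

```python
# Minimal sketch (assumed, not the paper's model): two related functions learned
# through one shared hidden layer, trained with plain gradient descent in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a main target y1 and a correlated auxiliary target y2 (a "hint").
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y1 = np.sin(X[:, 0] + X[:, 1]).reshape(-1, 1)   # main function
y2 = np.cos(X[:, 0] + X[:, 1]).reshape(-1, 1)   # related function with shared structure

# Shared hidden layer; separate linear heads for the two tasks.
H = 16
W_h = rng.normal(0, 0.5, size=(2, H)); b_h = np.zeros(H)
W_1 = rng.normal(0, 0.5, size=(H, 1)); b_1 = np.zeros(1)
W_2 = rng.normal(0, 0.5, size=(H, 1)); b_2 = np.zeros(1)

lr = 0.1
n = X.shape[0]
for step in range(2000):
    # Forward pass through the shared representational substrate.
    h = np.tanh(X @ W_h + b_h)
    p1 = h @ W_1 + b_1
    p2 = h @ W_2 + b_2

    # Summed mean-squared error over both tasks.
    e1, e2 = p1 - y1, p2 - y2

    # Backward pass: the shared weights receive gradient from BOTH tasks,
    # so only hidden codes consistent with both cues survive training.
    g1, g2 = 2 * e1 / n, 2 * e2 / n
    dW_1, db_1 = h.T @ g1, g1.sum(0)
    dW_2, db_2 = h.T @ g2, g2.sum(0)
    dh = g1 @ W_1.T + g2 @ W_2.T
    dz = dh * (1 - h ** 2)
    dW_h, db_h = X.T @ dz, dz.sum(0)

    W_1 -= lr * dW_1; b_1 -= lr * db_1
    W_2 -= lr * dW_2; b_2 -= lr * db_2
    W_h -= lr * dW_h; b_h -= lr * db_h

print("task-1 MSE:", float(np.mean(e1 ** 2)), "task-2 MSE:", float(np.mean(e2 ** 2)))
```

Dropping the second head and its loss term recovers single-task training on y1, which gives a simple baseline for comparing learning speed and generalization with and without the related function.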

Copyright information

© 1998 Springer-Verlag London Limited

About this paper

Cite this paper

Christiansen, M.H. (1998). Improving Learning and Generalization in Neural Networks through the Acquisition of Multiple Related Functions. In: Bullinaria, J.A., Glasspool, D.W., Houghton, G. (eds) 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1546-5_6

  • DOI: https://doi.org/10.1007/978-1-4471-1546-5_6

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76208-9

  • Online ISBN: 978-1-4471-1546-5

  • eBook Packages: Springer Book Archive
