Pseudo-network Growing for Gradual Interpretation of Input Patterns

  • Conference paper
Neural Information Processing. Models and Applications (ICONIP 2010)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6444)

Abstract

In this paper, we propose a new information-theoretic method to interpret competitive learning. The method is called "pseudo-network growing," because a network is re-grown gradually after learning, taking into account the importance of its components. In particular, we apply the method to clarify the class structure of self-organizing maps. First, the importance of the input units is computed; input units are then added gradually, in order of importance. The number of corresponding competitive units can be expected to increase gradually, revealing the main characteristics of the network configuration and the input patterns. We applied the method to the well-known Senate data with two distinct classes. With the conventional SOM, explicit class boundaries could not be obtained, owing to the inappropriate map size imposed in the experiment. With pseudo-network growing, however, a clear boundary was observed in the first growing stage, and the detailed class structure was gradually reproduced.
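The staged procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a plain online SOM trainer and a stand-in importance score (between-class mean difference per input unit) in place of the paper's information-theoretic importance measure; the function names, toy data, and all parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=30, lr0=0.5, sigma0=2.0):
    """Minimal online SOM trainer with a Gaussian neighbourhood."""
    n, d = data.shape
    w = rng.normal(size=(grid[0] * grid[1], d))
    # Grid coordinates, used for neighbourhood distances on the map.
    gy, gx = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    coords = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
    steps, t = epochs * n, 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            frac = t / steps
            lr = lr0 * (1 - frac)                 # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighbourhood
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                       / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
            t += 1
    return w

def pseudo_network_growing(data, importance, n_stages=3):
    """Re-train a SOM on progressively larger subsets of input units,
    added in decreasing order of a precomputed importance score."""
    order = np.argsort(importance)[::-1]          # most important first
    d = data.shape[1]
    stages = []
    for k in np.linspace(max(1, d // n_stages), d, n_stages).astype(int):
        active = order[:k]                        # input units used at this stage
        stages.append((active, train_som(data[:, active])))
    return stages

# Toy two-class data standing in for the Senate voting records:
# the first three input units carry the class separation.
a = rng.normal(0.0, 0.3, size=(30, 8)); a[:, :3] += 2.0
b = rng.normal(0.0, 0.3, size=(30, 8)); b[:, :3] -= 2.0
data = np.vstack([a, b])

# Stand-in importance: between-class discriminability of each input unit.
importance = np.abs(a.mean(axis=0) - b.mean(axis=0))

stages = pseudo_network_growing(data, importance, n_stages=3)
for active, w in stages:
    print(len(active), "input units ->", w.shape[0], "competitive units")
```

Because the most discriminative input units enter first, the earliest stage should already separate the two classes on the map, with later stages filling in finer structure, mirroring the behaviour reported for the Senate data.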


Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kamimura, R. (2010). Pseudo-network Growing for Gradual Interpretation of Input Patterns. In: Wong, K.W., Mendis, B.S.U., Bouzerdoum, A. (eds) Neural Information Processing. Models and Applications. ICONIP 2010. Lecture Notes in Computer Science, vol 6444. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17534-3_46

  • DOI: https://doi.org/10.1007/978-3-642-17534-3_46

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17533-6

  • Online ISBN: 978-3-642-17534-3

  • eBook Packages: Computer Science (R0)
