
Self-sustained Thought Processes in a Dense Associative Network

  • Conference paper
KI 2005: Advances in Artificial Intelligence (KI 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3698)


Abstract

Several guiding principles for thought processes are proposed, and a neural-network-type model implementing these principles is presented and studied. We suggest considering thinking within an associative network built up of overlapping memory states. We consider a homogeneous associative network, as biological considerations rule out distinct conjunction units between the pieces of information (the memories) stored in the brain. We therefore propose that memory states have a dual functionality: on the one hand they represent the stored information, and on the other hand they serve as the associative links between the different dynamical states of the network, which are transient attractors.
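
To make the dual functionality concrete, the following toy construction (our illustration; the network size, coupling values, and memory patterns are assumptions, not taken from the paper) stores four overlapping sparse memories in a single homogeneous weight matrix. The neurons shared by two memories are the only associative link between them; no separate conjunction units are introduced.

```python
import numpy as np

# Toy homogeneous associative network built from overlapping sparse
# memory states (all sizes and coupling values below are illustrative
# assumptions, not parameters of the paper).
N = 8
memories = [            # four memories on a ring, each overlapping
    {0, 1, 2, 3},       # its neighbours in two neurons
    {2, 3, 4, 5},
    {4, 5, 6, 7},
    {6, 7, 0, 1},
]

w = np.full((N, N), -0.2)        # weak background inhibition (assumed value)
for mem in memories:
    for i in mem:
        for j in mem:
            if i != j:
                w[i, j] = 1.0    # excitation inside each memory clique
np.fill_diagonal(w, 0.0)

# Dual functionality: the overlap of two memories is itself part of both
# stored patterns and, at the same time, the associative link between them.
link_0_to_1 = memories[0] & memories[1]   # {2, 3}
```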

We implement these principles within a generalized winners-take-all neural network with sparse coding and an additional coupling to local reservoirs. We show that this network is capable of autonomously generating a self-sustained time series of memory states, which we identify with a thought process; each memory state is associatively connected with its predecessor.
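
The sketch below illustrates how coupling to local reservoirs can turn the stored memories into transient attractors that hand activity on to one another. It is a drastic simplification of the paper's model: the competition there is a continuous, neuron-level generalized winners-take-all dynamics, whereas here it is collapsed to whole memory states, and the depletion and recovery rates are invented for illustration.

```python
import numpy as np

# Minimal sketch of a self-sustained sequence of transient attractors,
# assuming a memory-level competition instead of the paper's neuron-level
# winners-take-all dynamics; all rates are illustrative.
memories = [{0, 1, 2, 3}, {2, 3, 4, 5}, {4, 5, 6, 7}, {6, 7, 0, 1}]
N = 8
phi = np.ones(N)                  # local reservoirs, one per neuron
active = memories[0]              # start in the first memory state
DEPLETE, RECOVER = 0.1, 0.02      # assumed reservoir rates

sequence = []
for t in range(60):
    # Each memory's support: the associative link to the currently active
    # state (its shared, still-firing neurons) times the mean reservoir
    # level of its own neurons.
    scores = [len(m & active) * np.mean([phi[i] for i in m]) for m in memories]
    winner = int(np.argmax(scores))
    active = memories[winner]
    sequence.append(winner)
    # Active neurons drain their reservoirs, inactive ones recover, so the
    # current attractor is only transient and eventually hands activity on
    # to an overlapping memory.
    for i in range(N):
        phi[i] = max(0.0, phi[i] - DEPLETE) if i in active else min(1.0, phi[i] + RECOVER)

print(sequence)   # e.g. [0, 0, ..., 0, 1, 1, 2, 2, ...]: a wandering sequence of memories
```

Each entry of the resulting sequence is associatively connected with its predecessor through the shared neurons; in the full model this associative chaining is what is identified with a thought process.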

The system shows several emergent features: it is able (a) to recognize external patterns in a noisy background, (b) to focus attention autonomously, and (c) to represent hierarchical memory states with an internal structure.
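
As a rough illustration of point (a), the same toy competition can be biased by an external input: the memory receiving the largest summed input on its own neurons wins even when the pattern is embedded in noise. The input coupling and the noise amplitude below are assumptions, not values from the paper.

```python
import numpy as np

# Sketch of pattern recognition in a noisy background, assuming the
# external input simply adds to the support of each stored memory.
rng = np.random.default_rng(0)
memories = [{0, 1, 2, 3}, {2, 3, 4, 5}, {4, 5, 6, 7}, {6, 7, 0, 1}]
N = 8

signal = np.zeros(N)
signal[list(memories[2])] = 1.0              # present memory 2 ...
noisy_input = signal + 0.4 * rng.random(N)   # ... on top of background noise

# External support of each memory = summed input on its own neurons.
ext = [noisy_input[list(m)].sum() for m in memories]
print(int(np.argmax(ext)))                   # -> 2: the embedded pattern wins the competition
```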





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gros, C. (2005). Self-sustained Thought Processes in a Dense Associative Network. In: Furbach, U. (eds) KI 2005: Advances in Artificial Intelligence. KI 2005. Lecture Notes in Computer Science, vol 3698. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11551263_29


  • DOI: https://doi.org/10.1007/11551263_29

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28761-2

  • Online ISBN: 978-3-540-31818-7

  • eBook Packages: Computer Science (R0)
