Annealed RNN learning of finite state automata

  • Poster Presentations 1
  • Conference paper

Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1112)

Included in the conference series: ICANN — International Conference on Artificial Neural Networks

Abstract

In recurrent neural network (RNN) learning of finite state automata (FSA), we discuss how the neuron gain (β) influences the stability of the state representation and the performance of learning. We formally show the existence of a critical neuron gain β₀: any β larger than β₀ allows an RNN to maintain a stable representation of the states of an acquired FSA. Motivated by the existence of β₀ and by the need to avoid local minima, we propose a new RNN learning method that schedules β during training, called annealed RNN learning. Our experiments show that annealed RNN learning outperformed learning with a constant β.
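
The abstract describes an algorithmic idea that is easy to sketch. The toy example below, a minimal sketch and not the authors' implementation, trains a first-order RNN whose units use a sigmoid with gain β, raising β on a linear schedule across epochs: a small early β gives a smooth error surface that helps avoid local minima, while the final large β (above the critical gain β₀) drives the hidden states toward saturation, where the state representation is stable. The parity automaton, the network size, the linear schedule, and all names (forward, train, make_data, beta_lo, beta_hi) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy target FSA: accept binary strings containing an even number of 1s.
    def label(s):
        return float(s.count(1) % 2 == 0)

    def make_data(n=200, max_len=8):
        data = []
        for _ in range(n):
            s = list(rng.integers(0, 2, rng.integers(1, max_len + 1)))
            data.append((s, label(s)))
        return data

    n_hid = 4
    W = rng.normal(0.0, 0.5, (n_hid, n_hid))  # recurrent weights
    U = rng.normal(0.0, 0.5, n_hid)           # input weights (scalar input)
    v = rng.normal(0.0, 0.5, n_hid)           # readout weights

    def forward(s, beta):
        hs = [np.zeros(n_hid)]                # initial state
        for x in s:
            hs.append(sigmoid(beta * (W @ hs[-1] + U * x)))
        y = sigmoid(beta * (v @ hs[-1]))      # acceptance probability
        return hs, y

    def train(data, epochs=60, lr=0.1, beta_lo=0.5, beta_hi=8.0):
        global W, U, v
        for epoch in range(epochs):
            # Annealed schedule: raise the gain linearly from beta_lo to beta_hi.
            beta = beta_lo + (beta_hi - beta_lo) * epoch / (epochs - 1)
            for s, t in data:
                hs, y = forward(s, beta)
                # Backprop through time; note d/dz sigmoid(beta*z) = beta*s*(1-s).
                dW, dU = np.zeros_like(W), np.zeros_like(U)
                g = (y - t) * beta * y * (1.0 - y)
                dv = g * hs[-1]
                dh = g * v
                for k in range(len(s), 0, -1):
                    dpre = dh * beta * hs[k] * (1.0 - hs[k])
                    dW += np.outer(dpre, hs[k - 1])
                    dU += dpre * s[k - 1]
                    dh = W.T @ dpre
                W -= lr * dW; U -= lr * dU; v -= lr * dv
            if epoch % 10 == 0 or epoch == epochs - 1:
                acc = np.mean([(forward(s, beta)[1] > 0.5) == bool(t)
                               for s, t in data])
                print(f"epoch {epoch:2d}  beta {beta:4.2f}  train acc {acc:.2f}")

    train(make_data())

A constant-β baseline corresponds to fixing beta_lo = beta_hi; raising β amounts to lowering a computational temperature 1/β, which is the usual deterministic-annealing reading of such a schedule.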


Author information

Authors: Ki. Arai, R. Nakano

Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Arai, Ki., Nakano, R. (1996). Annealed RNN learning of finite state automata. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_89

  • DOI: https://doi.org/10.1007/3-540-61510-5_89

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2

  • eBook Packages: Springer Book Archive
