Hybrid neural state machine for neural network

  • Research Paper
  • Published in: Science China Information Sciences

Abstract

The integration of computer-science-oriented and neuroscience-oriented approaches is believed to be a promising way to develop artificial general intelligence (AGI). Recently, a hybrid Tianjic chip that integrates both approaches was reported, providing a general platform to facilitate AGI research. The control algorithm for coordinating various neural networks is the key to this platform; however, it remains primitive. In this work, we propose a hybrid neural state machine (H-NSM) framework that can efficiently cooperate with artificial neural networks and spiking neural networks and control their workflows to accomplish complex tasks. The H-NSM receives input from different types of networks, makes decisions by fusing information from multiple sources, and sends control signals to sub-networks or actuators. The H-NSM can be trained to adapt to context-aware or sequential tasks, thereby improving system robustness. The training algorithm works correctly even when only 50% of the forced-state information is provided. It achieved performance comparable to the optimal algorithm on the Tower of Hanoi task and achieved multi-task control on a self-driving bicycle. After only 50 training epochs, the transfer accuracy reached 100% on the test case. These results demonstrate that the H-NSM can advance control logic for hybrid systems, paving the way for designing complex intelligent systems and facilitating research towards AGI.
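To make the control flow concrete, the following is a minimal Python sketch of the kind of controller the abstract describes: a finite-state machine that fuses outputs from an ANN and an SNN into an input symbol, transitions its internal state, and emits a control signal for a sub-network or actuator. This is an illustration of the idea only, not the paper's actual H-NSM; all names (HybridNSM, fuse, step, the transition table, the fusion rule) are assumptions introduced for this sketch.

```python
# Minimal sketch of a hybrid neural-state-machine controller. This is an
# illustration of the abstract's idea, not the paper's implementation:
# all names and the fusion rule below are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class HybridNSM:
    # (current_state, fused_input_symbol) -> (next_state, control_signal)
    transitions: Dict[Tuple[str, str], Tuple[str, str]]
    state: str = "idle"

    def fuse(self, ann_logits: List[float], snn_rates: List[float]) -> str:
        # Toy fusion rule: trust whichever sub-network is more confident.
        source = ann_logits if max(ann_logits) >= max(snn_rates) else snn_rates
        return f"sym{source.index(max(source))}"

    def step(self, ann_logits: List[float], snn_rates: List[float]) -> str:
        # Fuse sub-network outputs, transition, and emit a control signal;
        # unknown (state, symbol) pairs leave the state unchanged.
        symbol = self.fuse(ann_logits, snn_rates)
        self.state, control = self.transitions.get(
            (self.state, symbol), (self.state, "noop"))
        return control


# Usage: a two-state machine that starts and stops an actuator.
nsm = HybridNSM(transitions={
    ("idle", "sym0"): ("running", "start_motor"),
    ("running", "sym1"): ("idle", "stop_motor"),
})
print(nsm.step([0.9, 0.1], [0.2, 0.3]))  # -> "start_motor"
print(nsm.step([0.1, 0.8], [0.0, 0.1]))  # -> "stop_motor"
```

In the paper's setting, the transition logic is itself trained rather than hand-coded, which is what allows the machine to keep working when only part of the forced-state supervision is available.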

Acknowledgements

This work was partly supported by the National Natural Science Foundation of China (Grant No. 61836004), the Brain-Science Special Program of Beijing (Grant No. Z181100001518006), the CETC Haikang Group-Brain Inspired Computing Joint Research Center, and the Suzhou-Tsinghua Innovation Leading Program (Grant No. 2016SZ0102).

Author information

Correspondence to Luping Shi.

About this article

Cite this article

Tian, L., Wu, Z., Wu, S. et al. Hybrid neural state machine for neural network. Sci. China Inf. Sci. 64, 132202 (2021). https://doi.org/10.1007/s11432-019-2988-1

