SCIENCE CHINA Information Sciences, Volume 64, Issue 3: 132202 (2021) https://doi.org/10.1007/s11432-019-2988-1

Hybrid neural state machine for neural network

  • Received: Nov 5, 2019
  • Accepted: Jun 29, 2020
  • Published: Jan 22, 2021



This work was partly supported by the National Natural Science Foundation of China (Grant No. 61836004), the Brain-Science Special Program of Beijing (Grant No. Z181100001518006), the Suzhou-Tsinghua Innovation Leading Program (Grant No. 2016SZ0102), and the CETC Haikang Group-Brain Inspired Computing Joint Research Center.



  • Figure 1

    (Color online) (a) Traditional neural network workflow; (b) the H-NSM takes input from ANNs and/or SNNs, controls the workflow according to the task at hand, and then sends control signals to the inference networks or actuators; (c) the H-NSM-C makes decisions based on different conditions and activates the desired branches to accomplish different tasks; (d) the H-NSM-S accomplishes sequential tasks and sends control signals according to the current step.

  • Figure 2

    (Color online) Demonstration of a three-state complete Moore state machine using an SNN-based H-NSM.
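As an illustrative companion to the Moore machine of Figure 2, the sketch below hand-wires a three-state ring machine out of threshold "state" and "transfer" neurons, following the matrix-product, fire-above-threshold scheme of Algorithm 1. The ring topology, weights, and thresholds here are our own choices, not values from the paper.

```python
import numpy as np

# Hypothetical 3-state ring machine built from threshold neurons.
# Transfer neuron T_j moves the machine from S_j to S_{(j+1) % 3};
# a transfer fires only when its external trigger arrives while the
# machine sits in the matching state.
S, T = 3, 3
ST, STT = 0.5, 1.5            # firing thresholds: state / transfer neurons

# CM: rows index the concatenated input [SOut, TOut], cols index states.
CM = np.zeros((S + T, S))
for j in range(S):
    CM[j, j] = 1.0                  # a state sustains itself
    CM[S + j, (j + 1) % S] = 1.0    # transfer excites the next state
    CM[S + j, j] = -1.0             # transfer inhibits the old state

# TM: rows index [Trigger, SOut], cols index transfer neurons.
TM = np.zeros((T + S, T))
for j in range(T):
    TM[j, j] = 1.0                  # external trigger input
    TM[T + j, j] = 1.0              # transfer is legal only in state S_j

s_out = np.array([1.0, 0.0, 0.0])   # start in S0
t_out = np.zeros(T)
for step in range(4):
    # trigger T0, T1, T2 on the first three steps, then nothing
    trigger = np.eye(T)[step % T] if step < 3 else np.zeros(T)
    s_in = np.concatenate([s_out, t_out])
    s_out = (s_in @ CM > ST).astype(float)       # state neurons fire
    t_in = np.concatenate([trigger, s_out])
    t_out = (t_in @ TM > STT).astype(float)      # transfer neurons fire
    print(step, s_out.astype(int), t_out.astype(int))
```

After the three triggers, the machine has walked S0 → S1 → S2 and returned to S0, with no transfer neuron active once the triggers stop.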

  • Figure 3

    (Color online) Training procedure of an SNN-based H-NSM.

  • Figure 4

    (Color online) (a) State transfer matrix training with accurate supervised signals; (b) state transfer matrix training with 60% correct supervised signals after 32 training epochs; (c) state transfer matrix training with 50% correct supervised signals after 32 training epochs; (d) state transfer matrix training with 50% correct supervised signals after 100 training epochs.

  • Figure 5

    (Color online) (a) The state transfer rules for the Tower of Hanoi; (b) the process flow for function $f(s,t)$ using an LIF neural network.
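The building block behind Figure 5(b) is the leaky integrate-and-fire (LIF) neuron. A minimal single-neuron sketch is given below; the time constant, threshold, and input current are illustrative values of our own, not parameters from the paper.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. The membrane potential
# leaks toward zero, integrates the input current, and emits a spike
# (then resets) whenever it crosses the threshold.
def lif(current, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron driven by an input-current array.

    Returns the list of time steps at which the neuron spiked."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(current):
        v += dt * (-v / tau + i_in)   # leaky integration
        if v > v_th:                  # threshold crossing -> spike
            spikes.append(step)
            v = v_reset               # reset after firing
    return spikes

spikes = lif(np.full(50, 0.15))       # constant drive for 50 steps
print(spikes)                         # [10, 21, 32, 43]
```

With this constant drive the potential follows $v_n = 1.5\,(1-0.9^n)$ between spikes, so the neuron fires regularly every 11 steps.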

  • Figure 6

    (Color online) Multitask bicycle platform, which takes commands from the environment to accomplish different tasks such as following a target person and avoiding obstacles.

  • Figure 7

    (Color online) The six-state state machine for the autopilot bicycle demo. (a) The state transfer rules; (b) the H-NSM-C receives camera video streams and microphone voice streams, and controls a steering motor to follow a target person, execute voice commands, or avoid obstacles; (c) neuron states and event signals recorded during testing.

  • Table 1  

    Table 1  State configuration

    State   Context      Action
    $S_0$   First move   Select the source and target pegs for the first move
    $S_1$   Select       Randomly select a source peg and a target peg
    $S_2$   Verify       Perform function $f$
    $S_3$   Move         Perform the move
    $S_4$   Finish

    Algorithm 1 Inference and training procedure

    for $t=1$ to $T_{\rm max}$ do
        SubProc 1: integrate and fire of the state neurons;
        SubProc 2: if training, learn the state transfer matrix (CM);
        SubProc 3: integrate and fire of the transfer neurons;
        SubProc 4: if training, learn the trigger matrix (TM);
    end for

    SubProc 1 decides the current state from the previous states and the transfer signals: ${\rm SIn}=[{\rm SOut},{\rm TOut}]$; $V_S={\rm SIn}\cdot{\rm CM}$; ${\rm SOut}_j=1$ if $V_{S_j}>{\rm ST}$, else $0$, for $j\in[1,S]$.

    SubProc 2 is the STDP-like training of the state transfer matrix: if ${\rm SOut}_j==0$ and ${\rm SForce}_j(t)==1$ and ${\rm SIn}_{(S+r)}==1$ and ${\rm CM}_{(r,j)}<P_{\rm Ths}$, then ${\rm CM}_{(r,j)}={\rm CM}_{(r,j)}+\delta$; if ${\rm SOut}_j==1$ and ${\rm SForce}_j(t)==0$ and ${\rm SIn}_{(S+r)}==1$ and ${\rm CM}_{(r,j)}>N_{\rm Ths}$, then ${\rm CM}_{(r,j)}={\rm CM}_{(r,j)}-\delta$; for $r\in[1,T]$, $j\in[1,S]$.

    SubProc 3 fires the transfer neurons: ${\rm TIn}=[{\rm Trigger},{\rm SOut}]$; $V_T={\rm TIn}\cdot{\rm TM}$; ${\rm TOut}_j=1$ if $V_{T_j}>{\rm STT}$, else $0$, for $j\in[1,T]$.

    SubProc 4 is the STDP-like training of the trigger matrix: if ${\rm TOut}_j==0$ and ${\rm TForce}_j(t)==1$ and ${\rm TIn}_r==1$ and ${\rm TM}_{(r,j)}<P_{\rm Tht}$, then ${\rm TM}_{(r,j)}={\rm TM}_{(r,j)}+\delta$; if ${\rm TOut}_j==1$ and ${\rm TForce}_j(t)==0$ and ${\rm TIn}_r==1$ and ${\rm TM}_{(r,j)}>N_{\rm Tht}$, then ${\rm TM}_{(r,j)}={\rm TM}_{(r,j)}-\delta$; for $r\in[1,S]$, $j\in[1,T]$.
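Algorithm 1 can be sketched compactly in NumPy. The shapes, thresholds, weight bounds, and learning step below are illustrative assumptions; for simplicity the update rule is applied over every active input row, whereas the paper restricts each matrix update to the index ranges given in SubProcs 2 and 4.

```python
import numpy as np

# Sketch of Algorithm 1: S state neurons, T transfer neurons,
# CM = state transfer matrix, TM = trigger matrix.
rng = np.random.default_rng(0)
S, T = 4, 4
ST, STT = 0.5, 0.5           # firing thresholds (assumed)
P_TH, N_TH = 1.0, -1.0       # weight bounds (assumed)
DELTA = 0.1                  # learning step (assumed)

CM = rng.uniform(-0.1, 0.1, (S + T, S))   # rows index [SOut, TOut]
TM = rng.uniform(-0.1, 0.1, (T + S, T))   # rows index [Trigger, SOut]

def stdp_update(M, x_in, y_out, y_force):
    """SubProc 2/4: potentiate where the teacher fired but the neuron
    did not; depress where the neuron fired but the teacher did not."""
    for r in range(M.shape[0]):
        if x_in[r] != 1:
            continue                       # only active inputs learn
        for j in range(M.shape[1]):
            if y_out[j] == 0 and y_force[j] == 1 and M[r, j] < P_TH:
                M[r, j] += DELTA
            elif y_out[j] == 1 and y_force[j] == 0 and M[r, j] > N_TH:
                M[r, j] -= DELTA

def run(trigger_seq, s_force=None, t_force=None, t_max=10):
    s_out, t_out = np.zeros(S), np.zeros(T)
    s_out[0] = 1                                      # start in state 0
    for t in range(t_max):
        s_in = np.concatenate([s_out, t_out])
        s_out = (s_in @ CM > ST).astype(float)        # SubProc 1
        if s_force is not None:
            stdp_update(CM, s_in, s_out, s_force[t])  # SubProc 2
        t_in = np.concatenate([trigger_seq[t], s_out])
        t_out = (t_in @ TM > STT).astype(float)       # SubProc 3
        if t_force is not None:
            stdp_update(TM, t_in, t_out, t_force[t])  # SubProc 4
    return s_out, t_out
```

Running inference alone means calling `run` with `s_force`/`t_force` left as `None`; passing teacher signals enables the STDP-like updates.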

  • Table 2  

    Table 2  Transfer conditions

    Transfer Transition    Condition
    $T_0$    $S_0$–$S_3$   Always true
    $T_1$    $S_1$–$S_2$   Always true
    $T_2$    $S_2$–$S_3$   Function $f$ returns true
    $T_3$    $S_2$–$S_1$   Function $f$ returns false
    $T_4$    $S_3$–$S_1$   Not finished
    $T_5$    $S_3$–$S_4$   Finished (all the disks are on peg C)
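The select/verify/move loop of Tables 1 and 2 can be sketched as a plain Python controller. The paper realizes the verify function $f$ with an LIF network (Figure 5); here an ordinary Boolean stand-in is used: a move is legal if the source peg is nonempty and its top disk is smaller than the top of the target peg. The peg names and the random strategy are illustrative.

```python
import random

# Tower of Hanoi controller following Tables 1 and 2:
# S1 (randomly select) -> S2 (verify with f) -> S3 (move),
# looping until all disks sit on peg C (S4, transfer T5).
def f(pegs, src, dst):
    """Boolean stand-in for the paper's LIF-based verify function."""
    if not pegs[src]:
        return False
    return not pegs[dst] or pegs[src][-1] < pegs[dst][-1]

def solve_random(n_disks, seed=0):
    """Random strategy: count the moves needed to finish."""
    rng = random.Random(seed)
    pegs = {"A": list(range(n_disks, 0, -1)), "B": [], "C": []}
    steps = 0
    while len(pegs["C"]) < n_disks:            # T5: finish condition
        src, dst = rng.sample("ABC", 2)        # S1: random select
        if f(pegs, src, dst):                  # S2: verify (T2/T3)
            pegs[dst].append(pegs[src].pop())  # S3: move
            steps += 1
    return steps

print(solve_random(3))
```

Random selection always terminates but wanders far beyond the optimum, which is what the "Random" row of Table 3 quantifies.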
  • Table 3  

    Table 3  Time (ms)/number of steps using different methods

    Method     Number of disks
               4          5           6             7             8
    Optimum    0.6/16     1.3/32      3.4/64        6.9/128       11.8/256
    Random     25.1/315   681.5/8328  3068.8/39930  5927.0/78599  86107.6/1273359
    H-NSM      0.9/16     2.3/32      3.9/64        9.8/128       16.9/256