SCIENCE CHINA Information Sciences, Volume 64, Issue 3: 132201 (2021) https://doi.org/10.1007/s11432-019-2878-y

## Rapid dynamical pattern recognition for sampling sequences

• Received: Oct 23, 2019
• Accepted: Feb 29, 2020
• Published: Feb 1, 2021

### Acknowledgment

This work was supported in part by National Natural Science Foundation of China (Grant No. 61890922) and in part by National Major Scientific Instruments Development Project (Grant No. 61527811).

### Supplement

Appendix

Analysis in the interval $\mathcal{I}'_k$

Let $H_i^s[k]:=\bar{W}_i^{s{\rm~T}}S(X[k])-f_i(X[k],p^{r})$ for all $k\in\mathcal{I}_k$. The solution of the synchronization-error dynamics (13) can be expressed as follows: $$\tilde{x}_i^{s}[k]=b_i^{k-T_{ak}}\tilde{x}_i^{s}[T_{ak}]+\sum\limits_{j=T_{ak}}\limits^{k-1}Tb_i^{k-1-j}H_i^s[j]. \tag{33}$$
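The closed-form expression (33) is the unrolled linear recursion $\tilde{x}_i^{s}[k+1]=b_i\tilde{x}_i^{s}[k]+TH_i^s[k]$. A minimal numerical check of this equivalence, using illustrative values for $b_i$, $T$, $T_{ak}$ and a random sequence for $H_i^s$ (none taken from the paper):

```python
import random

# Illustrative constants, not from the paper.
b_i, T, T_ak, K = 0.9, 0.01, 5, 60
H = [random.uniform(-1.0, 1.0) for _ in range(K)]

# Step the recursion x[k+1] = b_i * x[k] + T * H[k] directly.
x = 0.3  # initial error x[T_ak]
direct = {T_ak: x}
for k in range(T_ak, K - 1):
    x = b_i * x + T * H[k]
    direct[k + 1] = x

# Evaluate the closed form (33) at each step and compare.
for k in range(T_ak, K):
    closed = b_i ** (k - T_ak) * 0.3 + sum(
        T * b_i ** (k - 1 - j) * H[j] for j in range(T_ak, k)
    )
    assert abs(closed - direct[k]) < 1e-9
```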

Since no information about the synchronization error at the beginning of $\mathcal{I}_k$ (i.e., at step $T_{ak}$) is available, we must consider three cases: (i) the magnitude of $\tilde{x}_i^s[T_{ak}]$ is small; (ii) the magnitude of $\tilde{x}_i^s[T_{ak}]$ is large and $\tilde{x}_i^s[T_{ak}]$ has the same sign as $H_i^s[k]$; (iii) the magnitude of $\tilde{x}_i^s[T_{ak}]$ is large and $\tilde{x}_i^s[T_{ak}]$ has a different sign from $H_i^s[k]$.

To facilitate the analysis, we assume in what follows that the sign of $H_i^s[k]$ is positive. (The opposite case can be handled directly by the same proof with the sign reversed.)

Case (i). If $|\tilde{x}_i^s[T_{ak}]|<\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$, the synchronization error satisfies: \begin{align} \tilde{x}_i^{s}[k] &>-b_i^{k-T_{ak}}\frac{T(\epsilon_i^*+\varsigma_i^* +\frac{\mu_i}{2})}{1-b_i}+\sum\limits_{j=T_{ak}}\limits^{k-1}Tb_i^{k-1-j}H_i^s[j] \\ &>-\frac{Tb_i^{k-T_{ak}}(\epsilon_i^*+\varsigma_i^* +\frac{\mu_i}{2})}{1-b_i}+\frac{T(1-b_i^{k-T_{ak}})(\epsilon_i^*+\varsigma_i^*+\mu_i)}{1-b_i} \\ &=-\frac{Tb_i^{k-T_{ak}}(2\epsilon_i^*+2\varsigma_i^* +\frac{3\mu_i}{2})}{1-b_i}+\frac{T(\epsilon_i^*+\varsigma_i^*+\mu_i)}{1-b_i}. \tag{34} \end{align} Next, we estimate the maximum time needed to pass through the region $|\tilde{x}_i^s|<\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$. Using property (34), the maximum passing time can be estimated from the following condition: \begin{align} &-\frac{Tb_i^{k-T_{ak}}(2\epsilon_i^*+2\varsigma_i^*+\frac{3\mu_i}{2})}{1-b_i}+\frac{T(\epsilon_i^*+\varsigma_i^*+\mu_i)}{1-b_i}\ge \frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}, \\ &\Leftrightarrow \frac{Tb_i^{k-T_{ak}}(2\epsilon_i^*+2\varsigma_i^*+\frac{3\mu_i}{2})}{1-b_i}\le \frac{T\mu_i}{2(1-b_i)}. \tag{35} \end{align} Based on the analysis in (35), we conclude that $|\tilde{x}_i^s[k]|>\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$ holds if $$k>T_{ak}+\log_{b_i}{\frac{\mu_i}{4\epsilon_i^*+4\varsigma_i^*+3\mu_i}}. \tag{36}$$
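The bound (36) can be checked numerically. The sketch below uses illustrative constants (not from the paper), starts the error at the worst admissible point $-\frac{T(\epsilon_i^*+\varsigma_i^*+\mu_i/2)}{1-b_i}$, drives it with the minimal dynamics-difference magnitude $H_i^s=\epsilon_i^*+\varsigma_i^*+\mu_i$ at every step, and counts the steps until the threshold is crossed:

```python
import math

# Illustrative constants (not from the paper): sampling period T,
# contraction factor b_i, and the bounds eps*, varsigma*, mu.
T, b_i = 0.01, 0.9
eps, sig, mu = 0.1, 0.1, 0.05

threshold = T * (eps + sig + mu / 2) / (1 - b_i)

# Bound (36): maximum number of steps spent inside |x| < threshold.
l_bound = math.log(mu / (4 * eps + 4 * sig + 3 * mu), b_i)

# Worst case of (34): start at -threshold with the driving term at its
# minimal magnitude, then iterate x[k+1] = b_i * x[k] + T * H.
x, n = -threshold, 0
while x <= threshold:
    x = b_i * x + T * (eps + sig + mu)
    n += 1

# The observed crossing time respects the analytical bound.
assert n <= math.ceil(l_bound) + 1
```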

In case (i), a time interval $\mathcal{I}'_k=\{k\,|\,~|\tilde{x}_i^s[k]|<\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}\}=[T_{ak},T_{ak}']$ appears at the beginning of the time interval $\mathcal{I}_k$. The maximum length of the interval $\mathcal{I}'_k$ is $T_{ak}'-T_{ak}+1\le~l':=\log_{b_i}{\frac{\mu_i}{4\epsilon_i^*+4\varsigma_i^*+3\mu_i}}+1$. In the time interval $k\in\mathcal{I}_k\setminus\mathcal{I}'_k$, $|\tilde{x}_i^s[k]|>\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$ holds.

Case (ii). If $|\tilde{x}_i^s[T_{ak}]|\ge\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$ and $\tilde{x}_i^s[T_{ak}]$ has the same sign as $H_i^s[k]$, the synchronization error satisfies \begin{align} \tilde{x}_i^{s}[k] &>b_i^{k-T_{ak}}\frac{T(\epsilon_i^*+\varsigma_i^* +\frac{\mu_i}{2})}{1-b_i}+\sum\limits_{j=T_{ak}}\limits^{k-1}Tb_i^{k-1-j}H_i^s[j] \\ &>\frac{Tb_i^{k-T_{ak}}(\epsilon_i^*+\varsigma_i^* +\frac{\mu_i}{2})}{1-b_i}+\frac{T(1-b_i^{k-T_{ak}})(\epsilon_i^*+\varsigma_i^*+\mu_i)}{1-b_i} \\ &=\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i} +\frac{T(1-b_i^{k-T_{ak}})(\frac{\mu_i}{2})}{1-b_i}. \tag{37} \end{align} Since $\frac{T(1-b_i^{k-T_{ak}})(\frac{\mu_i}{2})}{1-b_i}$ is always nonnegative, we have $$\tilde{x}_i^{s}[k]>\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}. \tag{38}$$

In case (ii), the time interval $\mathcal{I}'_k=\{k\,|\,~|\tilde{x}_i^s[k]|<\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}\}$ is empty, i.e., $\mathcal{I}'_k=\emptyset$. In the time interval $k\in\mathcal{I}_k\setminus\mathcal{I}'_k$, $|\tilde{x}_i^s[k]|>\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$ holds.

Case (iii). If $|\tilde{x}_i^s[T_{ak}]|\ge\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$ and $\tilde{x}_i^s[T_{ak}]$ has a different sign from $H_i^s[k]$, there exists a time interval $\mathcal{I}'_k=\{k\,|\,~|\tilde{x}_i^s[k]|<\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}\}=[T_{ak}',T_{bk}']$ in $\mathcal{I}_k$ such that $$|\tilde{x}_i^s[k]|\ge\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}, \forall k\in[T_{ak},T_{ak}'-1], \tag{39}$$ $$|\tilde{x}_i^s[k]|\le\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}, \forall k\in[T_{ak}',T_{bk}'], \tag{40}$$ $$|\tilde{x}_i^s[k]|\ge\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}, \forall k\in[T_{bk}'+1,T_{bk}]. \tag{41}$$

In case (iii), according to the analysis of case (i), the length of $\mathcal{I}_k'$ satisfies $T_{bk}'-T_{ak}'+1\le l'=\log_{b_i}{\frac{\mu_i}{4\epsilon_i^*+4\varsigma_i^*+3\mu_i}}+1$. In the time interval $k\in\mathcal{I}_k\setminus\mathcal{I}'_k$, $|\tilde{x}_i^s[k]|>\frac{T(\epsilon_i^*+\varsigma_i^*+\frac{\mu_i}{2})}{1-b_i}$ holds.
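Putting the three cases together: over $\mathcal{I}_k$, the set $\mathcal{I}'_k$ of steps with $|\tilde{x}_i^s[k]|$ below the threshold is either empty (case (ii)) or a single short run (cases (i) and (iii)). A small sketch for locating that run in a sampled error sequence (the sequences and threshold below are illustrative, not from the paper):

```python
# Locate the subinterval I'_k = {k : |x[k]| < threshold} in a sampled
# error sequence, assuming (as in (39)-(41)) it forms one contiguous run.
def small_error_interval(x, threshold):
    """Return (start, end) indices of the run with |x[k]| < threshold,
    or None if the error never enters that band (case (ii))."""
    inside = [k for k, v in enumerate(x) if abs(v) < threshold]
    if not inside:
        return None
    return inside[0], inside[-1]

# Case (i): a small initial error leaves the band and stays out.
assert small_error_interval([0.1, 0.3, 0.8, 1.2, 1.5], 0.5) == (0, 1)

# Case (ii): the error starts outside the band and never enters it.
assert small_error_interval([1.0, 1.2, 1.4], 0.5) is None
```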


• Figure 1

(Color online) Function approximation of $f_d$ of (a) Duf1, (b) Duf2, (c) Duf3, and (d) Duf4 in space. Function approximation of $f_v$ of (e) DVan1 and (f) DVan2 in space.

• Figure 2

(Color online) Function approximation of $f_d$ of (a) Duf1, (b) Duf2, (c) Duf3, and (d) Duf4 along the time axis. Function approximation of $f_v$ of (e) DVan1 and (f) DVan2 along the time axis.

• Figure 3

(Color online) State trajectories in (a) the first and (b) the second scenarios.

• Figure 4

(Color online) (a) Synchronization error of four training patterns in the first scenario; (b) average $L_1$ norm of synchronization error of four training patterns; (c) information of dynamic differences in the sense of Definition 2.

• Figure 5

(Color online) Information of dynamic differences in the sense of Definition 2 in the steady-state process.

• Figure 6

(Color online) (a) The length of subinterval of the information of dynamic differences of different TRPs; (b) distance from TEP to TRP1.

• Figure 7

(Color online) (a) RBF representation of $f_d$ of TRP1 along the test pattern trajectories in space; (b) RBF representation of $f_d$ of TRP1 along the time axis.

• Figure 8

(Color online) (a) Synchronization error of four training patterns in the second scenario; (b) average $L_1$ norm of synchronization error of four training patterns; (c) information of dynamic differences in the sense of Definition 2.

• Figure 9

(Color online) (a) Comparison in trajectories; (b) dynamic differences between TEP and TRPs.

• Figure 10

(Color online) Distance from TEP to TRPs.

• Figure 11

(Color online) RBF representation of $f_d$ of (a) TRP1 and (b) TRP2 along the time axis.

• Table 1

Table 1. System parameters of training patterns

| Pattern | $p_1$ | $p_2$ | $p_3$ | $q$ | $\omega$ | Initial state $X_0$ |
|---|---|---|---|---|---|---|
| Duf1 | $1.2$ | $-1.5$ | $1$ | $0.9$ | $1.8$ | $[0.438;0.07713]$ |
| Duf2 | $0.4$ | $-1.5$ | $1$ | $0.9$ | $1.8$ | $[0.438;0.07713]$ |
| Duf3 | $0.55$ | $-1.1$ | $1$ | $1.498$ | $1.8$ | $[0.438;0.07713]$ |
| Duf4 | $0.2$ | $-1.1$ | $1$ | $1.498$ | $1.8$ | $[0.438;0.07713]$ |
| DVan1 | $0.6$ | $1$ | $0.8$ | $1$ | $1.498$ | $[1.3;2.2]$ |
| DVan2 | $0.6$ | $1$ | $1.3$ | $1$ | $1.498$ | $[1.3;2.2]$ |
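For illustration, a training-pattern trajectory can be generated from a Table 1 row as below. The exact Duffing equations used in the paper are not restated here; the standard forced Duffing form $\dot{x}_1=x_2$, $\dot{x}_2=p_1x_1+p_2x_1^3-p_3x_2+q\cos(\omega t)$, the step size, and the horizon are all assumptions of this sketch:

```python
import math

def simulate_duffing(p1, p2, p3, q, w, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of an assumed forced Duffing form:
    x1' = x2,  x2' = p1*x1 + p2*x1**3 - p3*x2 + q*cos(w*t)."""
    x1, x2 = x0
    traj = [(x1, x2)]
    for k in range(steps):
        t = k * dt
        dx1 = x2
        dx2 = p1 * x1 + p2 * x1 ** 3 - p3 * x2 + q * math.cos(w * t)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2  # Euler step
        traj.append((x1, x2))
    return traj

# Duf1 parameters and initial state from Table 1.
traj = simulate_duffing(1.2, -1.5, 1, 0.9, 1.8, (0.438, 0.07713))
```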
• Table 2

Table 2. Transformation parameters of different systems

| System | Shifting of $x_1$ ($S_{h1}$) | Scaling of $x_1$ ($S_{c1}$) | Shifting of $x_2$ ($S_{h2}$) | Scaling of $x_2$ ($S_{c2}$) |
|---|---|---|---|---|
| Duf(1,2) | $0$ | $1/1.2$ | $-0.8$ | $1/1.2$ |
| Duf(3,4) | $0$ | $1/3.5$ | $0$ | $1/3.5$ |
| DVan | $0$ | $1/5$ | $0$ | $1/5$ |
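A minimal sketch of applying a Table 2 row to a state sample, assuming the shift-then-scale composition $x\mapsto S_c(x+S_h)$; the ordering actually used in the paper may differ:

```python
def transform(x1, x2, sh1, sc1, sh2, sc2):
    """Normalize a state sample: shift each component, then scale it
    (assumed composition order, not confirmed by the paper)."""
    return sc1 * (x1 + sh1), sc2 * (x2 + sh2)

# Duf(1,2) row of Table 2 applied to an illustrative sample.
y1, y2 = transform(1.2, 0.4, 0.0, 1 / 1.2, -0.8, 1 / 1.2)
```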
• Table 3

Table 3. System parameters of test patterns

| Pattern | $p_1$ | $p_2$ | $p_3$ | $q$ | $\omega$ | Initial state $X_0$ |
|---|---|---|---|---|---|---|
| TEP in Scenario 1 (Duf) | $1.22$ | $-1.5$ | $1$ | $0.9$ | $1.8$ | $[0.438;0.07713]$ |
| TEP in Scenario 2 (Duf) | $2$ | $-1.3$ | $1$ | $1.498$ | $1.8$ | $[0.438;0.07713]$ |
