# SCIENCE CHINA Information Sciences, Volume 62, Issue 9: 192203(2019) https://doi.org/10.1007/s11432-018-9685-8

## Optimal control with irregular performance

Accepted Sep 30, 2018 • Published Aug 2, 2019

### Abstract

In this paper, we solve the long-standing fundamental problem of irregular linear-quadratic (LQ) optimal control, which has received significant attention since the 1960s. We derive the optimal controllers via the key technique of finding the analytical solutions to two different forward and backward differential equations (FBDEs). We give a complete solution to the finite-horizon irregular LQ control problem using a new "two-layer optimization" approach. We also obtain the necessary and sufficient condition for the existence of optimal and stabilizing solutions in the infinite-horizon case in terms of solutions to two Riccati equations and the stabilization of one specific system. For the first time, we explore the essential differences between irregular and standard LQ control, making a fundamental contribution to classical LQ control theory. We show that irregular LQ control is fundamentally different from regular control, as the irregular controller must additionally guarantee the terminal state constraint $P_1(T)x(T)=0$.

### Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61633014, 61573221, 61873332) and by the Qilu Youth Scholar Discipline Construction Funding from Shandong University.

### Supplement

Appendix

Proof of Lemma 3.2

Proof of necessity. Based on the discussion of (9)–(11), we can see that $p(t)\neq~P(t)x(t)$ under the condition (11), where $P(t)$ is the solution to (8). We therefore define a new variable $\Theta(t)$ as \begin{eqnarray}p(t)=P(t)x(t)+\Theta(t), \tag{36} \end{eqnarray} where it is clear that $\Theta(T)=0$. Next, we aim to derive the new FBDEs (16)–(18) under the solvability of Problem 1.

First, we take the derivative of (36), obtaining \begin{eqnarray}\dot{p}(t)=\dot{P}(t)x(t)+P(t)\big[Ax(t) +Bu(t)\big]+\dot{\Theta}(t). \tag{37} \end{eqnarray} From (6) and (36), we then find that \begin{eqnarray}\dot{p}(t)=-\big[A'P(t)x(t)+A'\Theta(t) +Qx(t)\big]. \tag{38} \end{eqnarray} By comparing (37) and (38), we obtain \begin{eqnarray}0&=&\dot{P}(t)x(t)+P(t)Ax(t)+P(t)Bu(t)+\dot{\Theta}(t) +A'P(t)x(t)+A'\Theta(t)+Qx(t). \tag{39} \end{eqnarray} Second, we aim to find the controller $u(t)$ and the new equilibrium condition (16). By using (36), we can formulate the equilibrium condition (7) as \begin{eqnarray}0=Ru(t)+B'p(t)=Ru(t)+B'P(t)x(t)+B'\Theta(t). \tag{40} \end{eqnarray} Taken together with (11), this can also be written as \begin{eqnarray}u(t)&=&-R^{\dag}\left(B'P(t)x(t)+B'\Theta(t)\right)+(I-R^{\dag}R)z(t), \tag{41} \end{eqnarray} where $z(t)$ is an arbitrary vector with compatible dimension such that the following equality holds: \begin{eqnarray}0&=&(I-RR^{\dag})\left(B'P(t)x(t)+B'\Theta(t)\right). \tag{42} \end{eqnarray} Let \begin{eqnarray}T_0(I-R^{\dag}R)z(t)=\left[ \begin{array}{c} 0 \\ u_1(t) \\ \end{array} \right], \tag{43} \end{eqnarray} where $u_1(t)=\Upsilon_{T_0}~z(t)\in~\mathbb{R}^{m-m_0(t)}$. Now, we can rewrite (42) as (16). Note that \begin{eqnarray}I-RR^{\dag} &=&(I-RR^{\dag})(I-RR^{\dag}) \\ &=&(I-RR^{\dag})T'_0(T^{-1}_0)'(I-RR^{\dag}) \\ &=&\left[ \begin{array}{cc} 0 & \Upsilon'_{T_0} \\ \end{array} \right](T^{-1}_0)'(I-RR^{\dag}), \tag{44} \end{eqnarray} where we have used (14) to derive the last equality. By using the definitions below (14), we can rewrite (42) as \begin{eqnarray}0&=&\Upsilon'_{T_0}\Big[C_0(t)x(t)+B_0'\Theta(t)\Big]. \tag{45} \end{eqnarray} Note that $\Upsilon'_{T_0}$ is of full column rank, and thus Eq. (45) can be directly rewritten as (16).
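The step from (40) to (41) and (42) is the standard characterization of all solutions of a consistent linear system via the Moore–Penrose pseudoinverse, and (44) and (47) rely on the idempotency of the projector $I-R^{\dag}R$. The following numerical sketch (illustrative toy matrices, not the paper's data; `b` stands in for $B'P(t)x(t)+B'\Theta(t)$) checks both facts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Singular weight R >= 0 (rank one), as in the irregular case where the
# equilibrium condition 0 = R u + B'p does not determine u uniquely.
R = np.array([[1.0, 0.0],
              [0.0, 0.0]])
Rp = np.linalg.pinv(R)           # Moore-Penrose pseudoinverse R^dagger
Pk = np.eye(2) - Rp @ R          # projector I - R^dagger R onto ker R

# Idempotency used in (44) and (47): (I - R^dagger R)^2 = I - R^dagger R.
assert np.allclose(Pk @ Pk, Pk)

# A right-hand side b (standing in for B'P x + B'Theta) that satisfies the
# consistency condition (42): (I - R R^dagger) b = 0, i.e. b in Range(R).
b = R @ rng.standard_normal(2)
assert np.allclose((np.eye(2) - R @ Rp) @ b, 0.0)

# Every solution of 0 = R u + b has the form of (41):
# u = -R^dagger b + (I - R^dagger R) z for an arbitrary z.
for _ in range(5):
    z = rng.standard_normal(2)
    u = -Rp @ b + Pk @ z
    assert np.allclose(R @ u + b, 0.0)
```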

Third, we derive the dynamics of $\Theta(t)$. Substituting (41) into (39) and using (8) yields \begin{eqnarray}0&=&\dot{P}(t)x(t)+P(t)Ax(t) +A'P(t)x(t)+A'\Theta(t)+Qx(t)+\dot{\Theta}(t) \\ & &-P(t)BR^{\dag}\left(B'P(t)x(t)+B'\Theta(t)\right)+P(t)B(I-R^{\dag}R)z(t) \\ &=&\dot{\Theta}(t)+\left(A'-P(t)BR^{\dag}B'\right)\Theta(t)+P(t)B(I-R^{\dag}R)z(t). \tag{46} \end{eqnarray} As $(I-R^{\dag}R)^2=I-R^{\dag}R$, we find that \begin{eqnarray}P(t)B(I-R^{\dag}R)z(t) &=&P(t)B(I-R^{\dag}R)T_0^{-1}T_0(I-R^{\dag}R)z(t) \\ &=&P(t)B(I-R^{\dag}R)T_0^{-1}\left[ \begin{array}{c} 0 \\ u_1(t) \\ \end{array} \right] \\ &=&\left[ \begin{array}{cc} * & C_0'(t) \\ \end{array} \right]\left[ \begin{array}{c} 0 \\ u_1(t) \\ \end{array} \right]=C_0'(t)u_1(t). \tag{47} \end{eqnarray} Thus, from (46) we obtain $\dot{\Theta}(t)=-[A'_0(t)\Theta(t)+C'_0(t)~u_1(t)],$ which implies that the dynamics of $\Theta(t)$ is given by (18).

Finally, we derive the dynamics equation (17). By substituting (41) into (5) and combining this with the fact that $B(I-R^{\dag}R)z(t)=B_0u_1(t)$, which can be obtained in a similar way to (47), we can derive the state dynamics (17).

Proof of sufficiency. Now, we show that Problem 1 is solvable if there exists a $u_1(t)$ that achieves (16). In fact, if Eq. (16) holds, then Eqs. (41) and (42) can be jointly rewritten as (40). Further, by reversing the process for (36)–(40), we can easily verify that $p(t)=P(t)x(t)+\Theta(t)$, where $x(t)$ and $\Theta(t)$ satisfy (16)–(18), solves (5)–(7). Thus, Problem 1 is solvable, completing the proof.

Proof of Theorem 3.3

Proof of sufficiency. Based on Lemma 3.2, it is sufficient to verify that $(\Theta(t),x(t))=(P_1(t)x(t),x(t))$ is the solution to the FBDEs (16)–(18). Taking the derivative of $P_1(t)x(t)$ yields \begin{eqnarray}\frac{{\rm d}[P_1(t)x(t)]}{{\rm d}t} &=&\dot{P}_1(t)x(t)+P_1(t)[A_0(t)+D_0P_1(t)] x(t)+P_1(t)B_0u_1(t) \\ &=&-A_0'(t)P_1(t)x(t)+P_1(t)B_0u_1(t) \\ &=&-A_0'(t)P_1(t)x(t)-C_0'(t)u_1(t), \tag{48} \end{eqnarray} where we have used (15) and (19) to derive the last equality. In addition, again using (19), we have \begin{eqnarray}C_0(t)x(t)+B_0'P_1(t)x(t)=0. \tag{49} \end{eqnarray} By comparing (16)–(18) with (49), (48), and (21), we can see that Eqs. (16)–(18) are solvable with $\Theta(t)=P_1(t)x(t)$ if $P_1(T)x(T)=0$. Thus, based on Lemma 3.2, Problem 1 is solvable.
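Equation (15) is a matrix Riccati differential equation integrated backward from its terminal value. As a hedged numerical sketch (time-invariant toy stand-ins for $A_0(t)$ and $D_0$, and an assumed terminal value $P_1(T)$, none of which are the paper's matrices), one can integrate it with `scipy` and sanity-check the result:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, T = 2, 1.0
A0 = np.array([[0.0, 1.0],
               [-1.0, -1.0]])   # stand-in for A_0(t) (assumption)
D0 = 0.5 * np.eye(n)            # stand-in for D_0 (assumption)
P1T = 0.1 * np.eye(n)           # assumed terminal value P_1(T)

def riccati_rhs(t, p):
    # Eq. (15) solved for the derivative:
    # dot(P1) = -(P1 A0 + A0' P1 + P1 D0 P1)
    P1 = p.reshape(n, n)
    return -(P1 @ A0 + A0.T @ P1 + P1 @ D0 @ P1).ravel()

# Backward integration from t = T down to t = 0.
back = solve_ivp(riccati_rhs, [T, 0.0], P1T.ravel(), rtol=1e-10, atol=1e-12)
P10 = back.y[:, -1].reshape(n, n)

# The Riccati flow preserves symmetry of the terminal data.
assert np.allclose(P10, P10.T, atol=1e-6)

# Round trip: integrating forward from P1(0) must recover P1(T).
fwd = solve_ivp(riccati_rhs, [0.0, T], P10.ravel(), rtol=1e-10, atol=1e-12)
assert np.allclose(fwd.y[:, -1].reshape(n, n), P1T, atol=1e-6)
```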

Proof of necessity. This proof is divided into two parts. First, we consider the case where the optimal solution is of closed-loop form, namely $u_1(t)=K_1(t)x(t)$. Based on Lemma 3.2, Eqs. (16)–(18) are solvable if Problem 1 is solvable. By substituting $u_1(t)=K_1(t)x(t)$ into (17) and (18), we obtain \begin{eqnarray}\dot{x}(t)&=&A_0(t)x(t)+D_0\Theta(t)+B_{0}K_1(t)x(t), \\ \dot{\Theta}(t)&=&-\left(A'_0(t)\Theta(t)+C'_0(t)K_1(t)x(t)\right). \end{eqnarray} Solving the above FBDEs gives us $\Theta(t)=\bar{P}(t)x(t)$, where $\bar{P}(t)$ satisfies \begin{eqnarray}0=\dot{\bar{P}}(t)+\bar{P}(t)A_0(t)+\bar{P}(t)D_0\bar{P}(t)+A_0'(t)\bar{P}(t)+\left(\bar{P}(t)B_0+C_0'(t)\right)K_1(t). \tag{50} \end{eqnarray} In addition, substituting $\Theta(t)=\bar{P}(t)x(t)$ into (16) yields \begin{eqnarray}0=C_0(t)+B_0'\bar{P}(t). \tag{51} \end{eqnarray} Thus, we can reformulate (50) as \begin{eqnarray}0=\dot{\bar{P}}(t)+\bar{P}(t)A_0(t)+\bar{P}(t)D_0\bar{P}(t)+A_0'(t)\bar{P}(t). \end{eqnarray} Comparing this with (15), we find that $\bar{P}(t)=P_1(t)$. Thus, Eq. (19) follows from (51) and Eq. (20) follows from $\Theta(T)=0$ and $\Theta(T)=P_1(T)x(T)$.

Second, the case where the controller $u_1(t)$ is of open-loop form can be solved similarly to the closed-loop case. This completes the proof.

Proof of Theorem 3.5

Proof of sufficiency. Under the condition (19), it is sufficient to verify that $P_1(T)x(T)=0$, given Theorem 3.3. To do this, we first state a formula relating $P_1(t)x(t)$ to the control $u_1(t)$ in terms of its dynamics. Similar to (48), the dynamics of $P_1(t)x(t)$ is given by \begin{eqnarray}\frac{{\rm d}[P_1(t)x(t)]}{{\rm d}t}=-A_0'(t)P_1(t)x(t)-C_0'(t)u_1(t). \end{eqnarray} Solving this differential equation yields \begin{eqnarray}P_1(t)x(t) &=&\int_{t}^TP_2(t,s)C_0'(s)u_1(s){\rm d}s+P_2(t,T)C, \tag{52} \end{eqnarray} where $C=P_1(T)x(T)$.

Next, we aim to prove that $C=0$ under the controller $u_1(t)$ defined in (25). If Eq. (23) holds, then for any $x_0$, there exists a $\zeta$ such that $P_1(t_0)x_0=G_1[t_0,T]\zeta,~$ where $\zeta=G_1^{\dag}[t_0,T]P_1(t_0)x_0$. We can now rewrite $u_1(t)$ in (25) as $u_1(t)=C_0(t)P_2'(t_0,t)\zeta$. By substituting $u_1(t)$ into (52), we obtain \begin{eqnarray}P_1(t_0)x_0 =\left[\int_{t_0}^TP_2(t_0,s)C_0'(s)C_0(s)P_2'(t_0,s){\rm d}s\right]\zeta+P_2(t_0,T)C=P_1(t_0)x_0+P_2(t_0,T)C. \end{eqnarray} As $P_2(t_0,T)$ is invertible, we have $C=0$, implying that $P_1(T)x(T)=0$. This completes the proof of sufficiency based on Theorem 3.3.

Proof of necessity. If the control problem is solvable, it follows from Theorem 3.3 that there exists a $P_1(t)$ such that Eq. (19) holds. We now prove that Eq. (23) does indeed hold. Otherwise, we would have that ${\rm~Range}~\big[P_1(t_0)\big]\nsubseteq {\rm~Range}~\left(G_1[t_0,T]\right)$, meaning that a non-zero vector $\rho$ would exist such that $\rho'P_1(t_0)\rho\neq0,\rho'G_1[t_0,T]\rho=0$. Then, we would obtain \begin{eqnarray}0=\rho'G_1[t_0,T]\rho=\rho'\left[\int_{t_0}^TP_2(t_0,s)C_0'(s)C_0(s)P_2'(t_0,s){\rm d}s\right]\rho =\int_{t_0}^T\|C_0(s)P_2'(t_0,s)\rho\|^2{\rm d}s, \end{eqnarray} implying that $C_0(s)P_2'(t_0,s)\rho=0$. Thus, we would have $\rho'\int_{t_0}^TP_2(t_0,s)C_0'(s)u_1(s){\rm~d}s=0$. Let $x_0=\rho$. From $\Theta(t_0)=P_1(t_0)x_0$, we would then have $\Theta(t_0)=P_1(t_0)\rho$. Combining this with $\Theta(t)=\int_{t}^TP_2(t,s)C_0'(s)u_1(s){\rm~d}s$ gives $$\rho'P_1(t_0)\rho=\rho'\left[\int_{t}^TP_2(t,s)C_0'(s)u_1(s){\rm d}s\right]\rho=0.$$ This is a contradiction, so Eq. (23) must hold, completing the proof.
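Both directions of this proof rest on elementary Gramian facts: a vector in ${\rm Range}(G_1[t_0,T])$ is reproduced through $\zeta=G_1^{\dag}v$, and $\rho'G_1\rho=0$ forces every integrand factor to annihilate $\rho$. A discretized toy illustration (random stand-ins for $P_2(t_0,s)C_0'(s)$ on a grid, not the paper's transition matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 200
ds = 1.0 / N

# Random stand-ins M_s for P_2(t_0, s) C_0'(s) on a time grid (assumptions).
Ms = [rng.standard_normal((n, 2)) for _ in range(N)]
G1 = sum(M @ M.T for M in Ms) * ds       # discretized Gramian G_1[t_0, T]

# Sufficiency direction: any v in Range(G_1) is reproduced via
# zeta = G_1^dagger v, as in the construction of u_1(t) in (25).
v = G1 @ rng.standard_normal(n)
zeta = np.linalg.pinv(G1) @ v
assert np.allclose(G1 @ zeta, v, atol=1e-8)

# Necessity direction: if rho' G_1 rho = 0 for a Gramian, every factor
# annihilates rho. Force rho into each kernel and check both facts.
rho = np.array([1.0, -1.0, 0.0])
Proj = np.eye(n) - np.outer(rho, rho) / (rho @ rho)
Ms_def = [Proj @ M for M in Ms]
G1_def = sum(M @ M.T for M in Ms_def) * ds
assert abs(rho @ G1_def @ rho) < 1e-10
assert all(np.allclose(M.T @ rho, 0.0) for M in Ms_def)
```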

Proof of Theorem 3.6

Let \begin{eqnarray}y(t)=\mathcal{T}_1'(t)x(t)=\left[ \begin{array}{c} y_1(t) \\ y_2(t) \\ \end{array} \right]. \tag{53} \end{eqnarray} Then, using (17) and the feedback controller $u_1(t)=K(t)x(t)$, we have \begin{eqnarray}\dot{y}(t)&=&\dot{\mathcal{T}}_1'(t)x(t)+\mathcal{T}_1'(t)\left(A_0(t)x(t)+D_0\Theta(t)+B_0K(t)x(t)\right) \\ &=&[\dot{\mathcal{T}}_1'(t)\mathcal{T}_1(t)+\mathcal{T}_1'(t)\left(A_0(t)+D_0P_1(t)+B_0K(t)\right)\mathcal{T}_1(t)]\mathcal{T}_1'(t)x(t) \\ &=&\left(\left[ \begin{array}{c} \tilde{T}_{1}(t) \\ \tilde{T}_{2}(t) \\ \end{array} \right]+\left[ \begin{array}{c} \hat{A}_{1}(t) \\ \hat{A}_{2}(t) \\ \end{array} \right]+\left[ \begin{array}{c} B_{1}(t) \\ B_{2}(t) \\ \end{array} \right]\mathcal{T}_1'(t) K(t)\mathcal{T}_1(t)\right)y(t) \\ &=&\left[ \begin{array}{c} \tilde{T}_{1}(t)+\hat{A}_{1}(t)+B_{1}(t)\mathcal{T}_1'(t)K(t)\mathcal{T}_1(t) \\ \tilde{T}_{2}(t)+\hat{A}_{2}(t)+B_{2}(t)\mathcal{T}_1'(t)K(t)\mathcal{T}_1(t) \\ \end{array} \right]y(t). \end{eqnarray} By applying (26), we obtain $$\left[ \begin{array}{c} \dot{y}_1(t) \\ \dot{y}_2(t) \\ \end{array} \right]=\left[ \begin{array}{cc} \frac{I}{t-T} & 0 \\ * & * \\ \end{array} \right]\left[ \begin{array}{c} y_1(t) \\ y_2(t) \\ \end{array} \right].$$ This implies that $\dot{y}_1(t)=\frac{I}{t-T}y_1(t)$. Then, solving this equation gives us $y_1(t)=\frac{T-t}{T-t_0}y_1(t_0)$, further implying that \begin{eqnarray}y_1(T)=0. \tag{54} \end{eqnarray} As \begin{eqnarray}0=\mathcal{T}_1'(T)P_1(T)\mathcal{T}_1(T)\mathcal{T}_1'(T)x(T)=\left[ \begin{array}{cc} \hat{P}(T) & 0 \\ 0 & 0 \\ \end{array} \right]y(T)=\hat{P}(T)y_1(T), \tag{55} \end{eqnarray} we can combine this with the invertibility of $\hat{P}(t)$ to obtain $y_1(T)=0$. 
We also have (53) and $y(T)=\mathcal{T}_1'(T)x(T)=\left[ \begin{array}{c} 0 \\ y_2(T) \\ \end{array} \right]$, so that \begin{eqnarray}P_1(T)x(T)&=&P_1(T)\mathcal{T}_1(T) \left[ \begin{array}{c} 0 \\ y_2(T) \\ \end{array} \right] =\mathcal{T}_1(T)\mathcal{T}'_1(T) P_1(T)\mathcal{T}_1(T) \left[ \begin{array}{c} 0 \\ y_2(T) \\ \end{array} \right] \\ &=&\mathcal{T}_1(T) \left[ \begin{array}{cc} \hat{P}(T) & 0 \\ 0 & 0 \\ \end{array} \right] \left[ \begin{array}{c} 0 \\ y_2(T) \\ \end{array} \right] =0. \end{eqnarray} This completes the proof.
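The key scalar computation in this proof, solving $\dot{y}_1(t)=\frac{1}{t-T}y_1(t)$, can be verified symbolically (a worked check added for illustration, not part of the original proof):

```python
import sympy as sp

t, T, t0, y0 = sp.symbols('t T t0 y0')

# Candidate solution from the proof: y1(t) = (T - t)/(T - t0) * y1(t0).
y1 = y0 * (T - t) / (T - t0)

# It satisfies the ODE dot(y1) = y1 / (t - T) ...
assert sp.simplify(y1.diff(t) - y1 / (t - T)) == 0
# ... matches the initial value at t0 and vanishes at the terminal time T,
# which is exactly (54).
assert y1.subs(t, t0) == y0
assert y1.subs(t, T) == 0
```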

Proof of Theorem 4.3

Proof of sufficiency. As solutions $P$ and $P_1$ exist to (27) and (32) that satisfy $P+P_1\geq~0$, we will show that the cost function is bounded below by $x_0'(P+P_1)x_0$. By taking the derivative of $x'(t)(P+P_1)x(t)$, we obtain \begin{eqnarray}\frac{\rm d}{{\rm d}t}\left(x'(t)(P+P_1)x(t)\right) &=&\left(Ax(t)+Bu(t)\right)'(P+P_1)x(t)+x'(t)(P+P_1)\left(Ax(t)+Bu(t)\right) \\ &=&-x'(t)Qx(t)+x'(t)(P+P_1)BR^{\dag}B'(P+P_1)x(t)+u'(t)B'(P+P_1)x(t) \\ & &+x'(t)(P+P_1)Bu(t), \end{eqnarray} where we have used (27) and (32) to derive the last equality. By integrating this from $0$ to $T$, we can further obtain \begin{eqnarray}\int_{0}^T& &\!\!\!\!\left(x'(t)Qx(t)+u'(t)Ru(t)\right){\rm d}t \\ & &=x'(0)(P+P_1)x(0)-x'(T)(P+P_1)x(T)+\int_0^T\bigg(u'(t)Ru(t)+x'(t)(P+P_1)BR^{\dag}B'(P+P_1)x(t) \\ & & +u'(t)B'(P+P_1)x(t)+x'(t)(P+P_1)Bu(t)\bigg){\rm d}t \\ & &=x'(0)(P+P_1)x(0)-x'(T)(P+P_1)x(T)+\int_0^T\Big[\left(u(t)+R^{\dag}B'(P+P_1)x(t)\right)'R(u(t) \\ & & +R^{\dag}B'(P+P_1)x(t))+u'(t)(I-RR^{\dag})B'(P+P_1)x(t)+x'(t)(P+P_1)B(I-R^{\dag}R)u(t)\Big]{\rm d}t \\ & &=x'(0)(P+P_1)x(0)-x'(T)(P+P_1)x(T)+\int_0^T\left(u(t)+R^{\dag}B'(P+P_1)x(t)\right)'R(u(t) \\ & & +R^{\dag}B'(P+P_1)x(t)){\rm d}t, \end{eqnarray} where we have used $(I-RR^{\dag})B'(P+P_1)=0$ to derive the last equality, obtained from (33). As $u(t)\in~\mathcal{U}$, we thus have $\lim_{T\rightarrow\infty}x'(T)(P+P_1)x(T)=0$. This implies that \begin{eqnarray}J(x_0;u)&=&\lim_{T\rightarrow\infty}\int_{0}^T\left(x'(t)Qx(t)+u'(t)Ru(t)\right){\rm d}t \\ &=&x'(0)(P+P_1)x(0)+\lim_{T\rightarrow\infty}\int_0^T\left(u(t)+R^{\dag}B'(P+P_1)x(t)\right)'R\left(u(t)+R^{\dag}B'(P+P_1)x(t)\right){\rm d}t. \tag{56} \end{eqnarray} Because $R\geq~0$, we obtain $J(x_0;u)\geq~x'(0)(P+P_1)x(0)$.
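In the regular special case (invertible $R$, so $R^{\dag}=R^{-1}$ and $P_1=0$), the completion-of-square identity behind (56) reduces to the classical statement that the optimal cost equals $x_0'Px_0$ with $P$ the stabilizing ARE solution. A numerical sketch of that special case only (toy double-integrator data, standard `scipy` calls; the irregular case uses $P+P_1$ instead):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Toy regular LQ problem (assumption: invertible R).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
# ARE residual: 0 = A'P + PA + Q - P B R^{-1} B' P.
res = A.T @ P + P @ A + Q - P @ B @ np.linalg.solve(R, B.T @ P)
assert np.allclose(res, 0.0, atol=1e-8)

K = np.linalg.solve(R, B.T @ P)    # optimal feedback gain
x0 = np.array([1.0, -0.5])

def rhs(t, s):
    # Augment the state with the running cost x'Qx + u'Ru.
    x = s[:2]
    u = -K @ x
    return np.concatenate([A @ x + B @ u, [x @ Q @ x + u @ R @ u]])

sol = solve_ivp(rhs, [0.0, 50.0], np.concatenate([x0, [0.0]]),
                rtol=1e-10, atol=1e-12)
J = sol.y[2, -1]
# The accumulated cost matches x0' P x0, mirroring J*(x0;u) = x0'(P+P1)x0.
assert abs(J - x0 @ P @ x0) < 1e-4
```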

Next, we show that the controller (35) is stabilizing. Substituting (35) into (1) yields \begin{eqnarray}\dot{x}(t)&=&Ax(t)-BR^{\dag}B'(P+P_1)x(t)+BG_0Kx(t) \\ &=&(A_0+D_0P_1)x(t)+BG_0Kx(t) \\ &=&(A_0+D_0P_1)x(t)+BT_0^{-1}\left[ \begin{array}{c} 0 \\ K \\ \end{array} \right]x(t). \tag{57} \end{eqnarray} Because $\Upsilon_{T_0}$ is of full row rank, there exists a $K_1$ such that $\Upsilon_{T_0}K_1=K$. From (14), we find that \begin{eqnarray}T_0(I-R^{\dag}R)K_1=\left[ \begin{array}{c} 0 \\ \Upsilon_{T_0} \\ \end{array} \right]K_1=\left[ \begin{array}{c} 0 \\ K \\ \end{array} \right]. \tag{58} \end{eqnarray} By substituting the above equation into (57), we then have \begin{eqnarray}\dot{x}(t)&=&(A_0+D_0P_1)x(t)+BT_0^{-1}T_0(I-R^{\dag}R)K_1x(t) \\ &=&(A_0+D_0P_1)x(t)+B(I-R^{\dag}R)T_0^{-1}T_0(I-R^{\dag}R)K_1x(t) \\ &=&(A_0+D_0P_1)x(t)+B_0\Upsilon_{T_0}K_1x(t) \\ &=&(A_0+D_0P_1)x(t)+B_0Kx(t). \end{eqnarray} Because $K$ was chosen such that $A_0+D_0P_1+B_0K$ is stable, the above system is stable, and hence the controller (35) is stabilizing.
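The existence of such a $K$ is a standard stabilizability question for the pair $(A_0+D_0P_1,\,B_0)$, i.e., the third condition of the theorem. A hedged sketch with assumed toy matrices (not the paper's) using pole placement:

```python
import numpy as np
from scipy.signal import place_poles

# Toy matrices (assumptions) standing in for A_0, D_0, P_1, and B_0.
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])   # eigenvalues +1, -1: unstable
D0 = np.zeros((2, 2))
P1 = np.zeros((2, 2))
B0 = np.array([[0.0], [1.0]])

M = A0 + D0 @ P1
assert max(np.linalg.eigvals(M).real) > 0  # open loop is unstable

# place_poles returns a gain G such that M - B0 G has the requested poles,
# so K = -G makes M + B0 K stable, as required for the controller (35).
G = place_poles(M, B0, [-1.0, -2.0]).gain_matrix
K = -G
Acl = M + B0 @ K
assert max(np.linalg.eigvals(Acl).real) < 0  # closed loop is stable
```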

Finally, we substitute the stabilizing controller (35) into the cost function (56) to verify that Eq. (35) is an optimal controller as desired. In fact, with this controller, Eq. (56) becomes \begin{eqnarray}J(x_0;u) &=&x'(0)(P+P_1)x(0)+\lim_{T\rightarrow\infty}\int_0^T\left([-R^{\dag}B'(P+P_1)+G_0K]x(t)+R^{\dag}B'(P+P_1)x(t)\right)' \\ & &\times R\left([-R^{\dag}B'(P+P_1)+G_0K]x(t)+R^{\dag}B'(P+P_1)x(t)\right){\rm d}t \\ &=&x'(0)(P+P_1)x(0)+\lim_{T\rightarrow\infty}\int_0^Tx'(t)K'G_0'RG_0Kx(t){\rm d}t. \tag{59} \end{eqnarray} By again using (58), it follows that $G_0K=T_0^{-1}\left[ \begin{array}{c} 0 \\ K \\ \end{array} \right]=(I-R^{\dag}R)K_1$. This implies that $RG_0K=R(I-R^{\dag}R)K_1=0$. Thus, with the controller (35), the cost function (59) reduces to \begin{eqnarray}J(x_0;u)=x'(0)(P+P_1)x(0). \end{eqnarray} This shows that Eq. (35) is an optimal controller, and the optimal cost is $J^*(x_0;u)=x_0'(P+P_1)x_0$.

Proof of necessity. Here, we derive the three conditions given in the theorem. First, we discuss the results for the finite-horizon optimization problem. Considering the asymptotic behavior of the solutions to the Riccati differential equations enables us to obtain the first and second conditions. Then, by applying the maximum principle, we find that the stabilizability condition is as stated by the third condition. The detailed proof is given below.

First, based on Theorem 3.3, there exists a $P^T(t)$ in (8) and a $P_1^T(t)$ in (15) with terminal values of $P^T(T)=0$ and $P_1^T(T)$ such that Eq. (19) holds, and there also exists a $u_{1}(t)$ that achieves (20), where $x(t)$ obeys (21) with initial value $x(0)=x_0$. In this case, the optimal cost is given by \begin{eqnarray}J_T^*(x_0;u)=x_0'\hat{P}^T(0)x_0, \tag{60} \end{eqnarray} where $\hat{P}^T(t)=P^T(t)+P_1^T(t)$. Given that $Q\geq0$ and $R\geq~0$, we have $J_T(x_0;u)\geq0$. Accordingly, for $T_1\leq~T_2$, we obtain $J_{T_1}(x_0;u)\leq~J_{T_2}(x_0;u)$. Together with (60) and the arbitrariness of $x_0$, we thus find that \begin{eqnarray}\hat{P}^{T_1}(0)\leq \hat{P}^{T_2}(0). \tag{61} \end{eqnarray} In addition, consider the cost function \begin{eqnarray}J_T^t(x_0;u)=\int_{t}^T[x'(t)Qx(t)+u'(t)Ru(t)]{\rm d}t. \end{eqnarray} By applying a similar argument to that for Theorem 3.3, the optimal cost yielded by minimizing $J_T^t(x_0;u)$ subject to (1) is given by \begin{eqnarray}J_T^t(x_0;u)=x'(t)\hat{P}^{T}(t)x(t). \end{eqnarray} For $t_1\leq~t_2$, we have that \begin{eqnarray}J_T^{t_1}(x_0;u)\geq J_T^{t_2}(x_0;u), \end{eqnarray} which implies that \begin{eqnarray}\hat{P}^{T}(t_1)\geq \hat{P}^{T}(t_2). \tag{62} \end{eqnarray} Combining (61) and (62), we see that $\hat{P}^T(t)$ is non-decreasing with respect to $T$ and that $\hat{P}^T(t)$ is non-increasing with respect to $t$.

Next, we show the uniform boundedness of $\hat{P}^T(t)$. As there exists an optimal and stabilizing controller, there also exists a positive constant $c$ such that \begin{eqnarray}J_T^t(x_0;u)&\leq& \int_0^\infty\left(x'(t)Qx(t)+u'(t)Ru(t)\right){\rm d}t\leq c\|x_0\|^2. \end{eqnarray} Combining this with (60), it follows that $\hat{P}^T(t)\leq~cI.$ As all the system matrices are time-invariant, $\hat{P}^T(t)$ is also time-invariant, i.e., $\hat{P}^T(t)=\hat{P}^{T-t}(0).$ Recalling (61) and (62), this shows that the limit $\lim_{T\rightarrow\infty}~\hat{P}^T(t)=\hat{P}$ exists. Moreover, by letting $T\rightarrow\infty$ in $\hat{P}^T(t)=P^T(t)+P_1^T(t)$, we see that $\hat{P}$ satisfies \begin{eqnarray}0&=&A'\hat{P}+\hat{P}A+Q-\hat{P}BR^{\dag}B'\hat{P}. \end{eqnarray} This is exactly the equation satisfied by $P$; hence, Eq. (27) is solvable. This further implies that Eq. (32) admits a solution $P_1$ and that $\hat{P}=P+P_1$. Likewise, letting $T\rightarrow\infty$ in (19) yields $C_0+B_0'P_1=0,$ which is exactly (33).
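The monotone-and-bounded argument above can be observed numerically in the regular special case: the finite-horizon solution $\hat{P}^T(0)$ is non-decreasing in $T$ and converges to the stationary ARE solution. A sketch with toy data (assumed matrices, invertible $R$):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Toy time-invariant data (assumptions, regular case with invertible R).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

def riccati_rhs(t, p):
    # Finite-horizon Riccati ODE solved for the derivative.
    P = p.reshape(2, 2)
    return -(A.T @ P + P @ A + Q - P @ B @ Rinv @ B.T @ P).ravel()

def P_at_0(T):
    # Integrate backward from the terminal value P^T(T) = 0.
    sol = solve_ivp(riccati_rhs, [T, 0.0], np.zeros(4),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

P5, P10 = P_at_0(5.0), P_at_0(10.0)
# Non-decreasing in T in the semidefinite order, as in (61):
assert min(np.linalg.eigvalsh(P10 - P5)) > -1e-8
# Convergence toward the stationary ARE solution P-hat:
Pinf = solve_continuous_are(A, B, Q, R)
assert np.linalg.norm(P10 - Pinf) < 1e-2
```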

Finally, by applying the maximum principle, the optimal solution satisfies \begin{eqnarray}& &\dot{x}(t)=Ax(t)+Bu(t), \tag{63} \\ & &\dot{p}(t)=-A'p(t)-Qx(t), \tag{64} \\ & &0=Ru(t)+B'p(t), \tag{65} \end{eqnarray} with $\lim_{t\rightarrow\infty}p(t)=0$ and $x(0)=x_0$. Recalling that the optimal solution is also stabilizing, we obtain \begin{eqnarray}\lim_{t\rightarrow\infty}x(t)=0, \end{eqnarray} and hence that \begin{eqnarray}\lim_{t\rightarrow\infty}Px(t)=0, \tag{66} \\ \lim_{t\rightarrow\infty}P_1x(t)=0. \tag{67} \end{eqnarray} Let \begin{eqnarray}p(t)=Px(t)+\Theta(t), \tag{68} \end{eqnarray} where $P(t)$ obeys (27) and $\Theta(t)$ is to be determined. From (66) and $\lim_{t\rightarrow\infty}p(t)=0$, we then have $\lim_{t\rightarrow\infty}\Theta(t)=0$.

By substituting (68) into (65), we obtain \begin{eqnarray}0&=&Ru(t)+B'Px(t)+B'\Theta(t). \end{eqnarray} This implies that \begin{eqnarray}u(t)&=&-R^{\dag}\left(B'Px(t)+B'\Theta(t)\right)+(I-R^{\dag}R)z(t), \tag{69} \end{eqnarray} and \begin{eqnarray}C_0x(t)+B_0'\Theta(t)=0, \tag{70} \end{eqnarray} where $z(t)$ is an arbitrary vector of compatible dimension.

Substituting (69) into (1) reduces the state dynamics to \begin{eqnarray}\dot{x}(t)&=&Ax(t)-BR^{\dag}\left(B'Px(t)+B'\Theta(t)\right)+B(I-R^{\dag}R)z(t) \\ &=&A_0x(t)+D_0\Theta(t)+B_0u_1(t), \tag{71} \end{eqnarray} where we have used $T_0(I-R^{\dag}R)z(t)=\left[ \begin{array}{c} 0 \\ \Upsilon_{T_0} \\ \end{array} \right]z(t)\triangleq\left[ \begin{array}{c} 0 \\ u_1(t) \\ \end{array} \right]$ in the derivation of the last equality. Taking the derivative of (68) yields \begin{eqnarray}\dot{p}(t)=P\dot{x}(t)+\dot{\Theta}(t)=P\left(A_0x(t)+D_0\Theta(t)+B_0u_1(t)\right)+\dot{\Theta}(t). \end{eqnarray} Comparing this with (64) and using (32), we have \begin{eqnarray}\dot{\Theta}(t)&=&-A_0'\Theta(t)-C_0'u_1(t). \tag{72} \end{eqnarray} We now prove that the solution to the FBDEs (70)–(72) is $\Theta(t)=P_1x(t)$, where $x(t)$ satisfies (34).
By taking the derivative of $P_1x(t)$, we obtain \begin{eqnarray}\frac{\rm d}{{\rm d}t}\left(P_1x(t)\right) &=&P_1(A_0+D_0P_1)x(t)+P_1B_0u_1(t) \\ &=&-A_0'P_1x(t)+P_1B_0u_1(t) \\ &=&-A_0'P_1x(t)-C_0'u_1(t), \tag{73} \end{eqnarray} where we have used (33) to derive the last equality. By comparing (34), (73), and (33) with (70)–(72), we immediately obtain \begin{eqnarray}\Theta(t)=P_1x(t). \end{eqnarray} Accordingly, the state dynamics is given by (34). To ensure the state's stability, the state dynamics must be stabilizable, giving us the third condition. This completes the proof.


Copyright 2020 Science China Press Co., Ltd. All rights reserved.