SCIENCE CHINA Information Sciences, Volume 61, Issue 1: 012201 (2018) https://doi.org/10.1007/s11432-016-9054-x

## Quasi-consistent fusion navigation algorithm for DSS

• Accepted Mar 15, 2017
• Published Aug 30, 2017

### Abstract

A fusion navigation algorithm for the distributed satellites system (DSS) utilizing relative range measurements is proposed in this paper. Based on the quasi-consistent extended Kalman filter (QCEKF), the fusion navigation algorithm provides an on-line evaluation of the navigation precision. In addition, the upper bound for the estimation error obtained from the fusion navigation algorithm is lower than that obtained with any single group of measurements, which indicates that the fusion navigation algorithm can automatically choose suitable redundant measurements to improve the navigation precision. Simulations show the feasibility and effectiveness of the proposed fusion navigation algorithm.

### Acknowledgment

This work was supported by the National Basic Research Program of China (973 Program) (Grant No. 2014CB845303) and by the National Center for Mathematics and Interdisciplinary Sciences, Chinese Academy of Sciences.

### Supplement

Appendix

Proof of the proposition in Sect. 3

Since the trackers are observable from the inertial measurements obtained by GPS, the observability analysis of the DSS reduces to investigating the observability of the users based on the relative range measurements. The observability matrix associated with the relative range measurement vector $H(X(t))$ can be denoted by $$D^{\rm relative}=\left[ (C^{\rm relative})^{\rm T}, (C^{\rm relative}A)^{\rm T} , (C^{\rm relative}A^2)^{\rm T}, \ldots \right]^{\rm T}, \tag{31}$$ where $A=\frac{\partial F}{\partial X^{\rm T}}$, $C^{\rm relative}=\frac{\partial H}{\partial X^{\rm T}}$. The fourth-order submatrix of the observability matrix $D^{\rm relative}$ which is associated with the measurement $H_l=\rho_{ij}$, $l=\frac{(14-i)(i-1)}{2}+j-i$, can be partitioned as $$\left[ c_{ij}^{\rm T}, (c_{ij}A)^{\rm T},(c_{ij}A^2)^{\rm T}, (c_{ij}A^3)^{\rm T} \right]^{\rm T} =\left[ 0_{4\times 6(i-1)}, P_{ij}, 0_{4\times 6(j-i-1)}, Q_{ij}, 0_{4\times 6(n-j)} \right], \tag{32}$$ where $c_{ij}=\frac{\partial \rho_{ij}}{\partial X^{\rm T}}$, and $$P_{ij}= \left[ \begin{array}{cc} \frac{\partial \rho_{ij}}{\partial r_i^{\rm T}}& 0\\ 0& \frac{\partial \rho_{ij}}{\partial r_i^{\rm T}} \\ \frac{\partial \rho_{ij}}{\partial r_i^{\rm T}} \frac{\partial f_i}{\partial r_i^{\rm T}} & 0\\0 &\frac{\partial \rho_{ij}}{\partial r_i^{\rm T}} \frac{\partial f_i}{\partial r_i^{\rm T}} \end{array} \right],\quad Q_{ij}= \left[ \begin{array}{cc} \frac{\partial \rho_{ij}}{\partial r_j^{\rm T}}& 0\\ 0& \frac{\partial \rho_{ij}}{\partial r_j^{\rm T}} \\ \frac{\partial \rho_{ij}}{\partial r_j^{\rm T}} \frac{\partial f_j}{\partial r_j^{\rm T}} & 0\\ 0 &\frac{\partial \rho_{ij}}{\partial r_j^{\rm T}} \frac{\partial f_j}{\partial r_j^{\rm T}} \end{array} \right].$$ Since the $L_2$-norm of the matrix $\frac{\partial f_i}{\partial r_i^{\rm T}}$, which is the maximum of the absolute values of its eigenvalues, satisfies $\| \frac{\partial f_i}{\partial r_i^{\rm T}} \| \leq 3.0824\times 10^{-6}$, the observability of the users can be analyzed by the submatrix $D^{\rm relative}_2=[ (C^{\rm relative})^{\rm T}, (C^{\rm relative}A)^{\rm T} ]^{\rm T}$. Therefore, the DSS is observable if and only if $${\rm{rank}} (D^{\rm relative}_2)=6(n-3). \tag{33}$$

Then a necessary condition for the DSS to be observable is that $${\rm{rank}} (D_i)=6, \quad D_i=\left[ C_i^{\rm T}, (C_i A_i)^{\rm T} \right]^{\rm T}, \quad i=4,\ldots,n, \tag{34}$$ where $A_i=\frac{\partial F_i}{\partial X_i^{\rm T}}$ and $C_i$ is composed of $\{\frac{\partial \rho_{ij}}{\partial X_i^{\rm T}}, \rho_{ij}\in \pi_i\}$.

The condition $\left|\pi_i\right| \geq 3$ in Sect. 3.1 means that the number of relative range measurements associated with $S_i$ is no less than 3, which is a necessary condition for (34). Moreover, $\left|\pi_4 \cup \pi_5 \cup \cdots \cup \pi_n\right| \geq 3(n-3)$ indicates that the number of components of the measurement vector $H(X(t))$ is no less than $3(n-3)$, which is a necessary condition for (33). Therefore, $\left|\pi_i\right| \geq 3$ and $\left|\pi_4 \cup \cdots \cup \pi_n\right| \geq 3(n-3)$ are necessary conditions for the DSS to be observable.
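The rank tests (33) and (34) can be checked numerically. The sketch below uses an illustrative toy system, not the paper's DSS model; the helper name `rank_condition` and the double-integrator dynamics are assumptions introduced here.

```python
import numpy as np

# Illustrative sketch (not the paper's DSS model): checking a rank condition of
# the form (33)-(34).  A is the Jacobian of the dynamics, C the Jacobian of the
# measurements; observability requires rank([C; C A]) equal to the state dimension.

def rank_condition(A: np.ndarray, C: np.ndarray) -> bool:
    """True if the reduced observability matrix [C; C A] has full column rank."""
    D2 = np.vstack([C, C @ A])
    return np.linalg.matrix_rank(D2) == A.shape[0]

# Toy 6-state example: double-integrator dynamics, three position measurements.
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])
C = np.hstack([np.eye(3), np.zeros((3, 3))])
print(rank_condition(A, C))        # True: rank 6, state fully observable
print(rank_condition(A, C[:2]))    # False: only two measurement directions
```

With only two of the three measurement rows the stacked matrix has rank 4 < 6, mirroring how too few relative range measurements leaves a user unobservable.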

Proof of Proposition TH-obs

The submatrix of the observability matrix $D^{\rm relative}$ in (31), which is associated with the users, can be denoted by $$D_{\rm user}=\left[ C_{\rm user}^{\rm T}, (C_{\rm user}A_{\rm user})^{\rm T} \right]^{\rm T}, \tag{35}$$ where $A_{\rm user}=\frac{\partial F_{\rm user}}{\partial X_{\rm user}^{\rm T}}$, $C_{\rm user}=\frac{\partial H}{\partial X_{\rm user}^{\rm T}}$, $F_{\rm user}=[F_4^{\rm T},\ldots,F_n^{\rm T}]^{\rm T}$, $X_{\rm user}=[X_4^{\rm T},\ldots,X_n^{\rm T}]^{\rm T}$. Then, with the relative measurement vector $H_{\rm nec}(X(t))$ in (9), the observability matrix $D_{\rm user}$ can be partitioned into the blocks $$D_i=\left[ \begin{array}{cc} d_i&0_{3\times3} \\ 0_{3\times3}&d_i \end{array}\right]_{6\times 6}, \quad d_i=\left[ \dfrac{(r_i-r_1)}{\rho_{1i}} , \dfrac{(r_i-r_2)}{\rho_{2i}}, \dfrac{(r_i-r_3)}{\rho_{3i}} \right]^{\rm T}, \quad i=4,\ldots,n. \tag{36}$$

If the satellites $S_i$, $i=4,\ldots,n$, are not coplanar with the trackers $S_1,S_2,S_3$, then ${\rm{rank}} (D_i)=6$. Therefore, we can obtain ${\rm{rank}} (D_{\rm user})=6(n-3)$, which means that the DSS is observable.
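The coplanarity criterion can be illustrated with a small numerical check. In the sketch below the tracker positions and the helper `user_observable` are illustrative assumptions; it builds $d_i$ from the three line-of-sight directions and tests ${\rm rank}(D_i)=6$.

```python
import numpy as np

# Illustrative check (assumed setup, not the paper's code) of the coplanarity
# condition: rank(D_i) = 6 iff the line-of-sight directions from the three
# trackers to user S_i span R^3.

def user_observable(r_user, trackers):
    # rows of d_i are (r_i - r_t)^T / rho, one row per tracker
    d_i = np.array([(r_user - r_t) / np.linalg.norm(r_user - r_t)
                    for r_t in trackers])
    D_i = np.block([[d_i, np.zeros((3, 3))],
                    [np.zeros((3, 3)), d_i]])
    return np.linalg.matrix_rank(D_i) == 6

trackers = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
print(user_observable(np.array([1., 1., 1.]), trackers))        # True: not coplanar
print(user_observable(np.array([0.5, 0.25, 0.25]), trackers))   # False: lies in the trackers' plane
```

The second test point lies in the plane $x+y+z=1$ spanned by the three trackers, so all three line-of-sight vectors share a common plane and $d_i$ drops to rank 2.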

Proof of Theorem TH_QCEKF

We use mathematical induction to prove the quasi-consistency of the algorithm. Assume that at the $k$th sampling point, $E[(\hat X_k-X_k)(\hat X_k-X_k)^{\rm T}]\leq P_k$. Linearizing the process equations in (10) and (14) at any $\xi\in \mathbb{R}^{6n}$ results in $$X_k+\Delta t F(X_k)=\xi+\Delta t F(\xi)+A_\xi( X_k- \xi)+\varphi_1, \quad \hat X_k+ \Delta t F(\hat X_k)=\xi+\Delta t F(\xi)+A_\xi(\hat X_k- \xi)+\varphi_2, \tag{37}$$ where $A_\xi=I_{6n}+\Delta t \frac{\partial F}{\partial X^{\rm T}}|_{ X=\xi}$, and the linearization errors are $$\varphi_1=\frac{\Delta t^2}{2} (X_k- \xi)^{\rm T} \left.\frac{\partial^2 F}{\partial X^2}\right|_{\xi} (X_k- \xi)+\cdots, \quad \varphi_2=\frac{\Delta t^2}{2} (\hat X_k- \xi)^{\rm T} \left.\frac{\partial^2 F}{\partial X^2}\right|_{\xi} (\hat X_k- \xi)+\cdots. \tag{38}$$ Thus the predicted error at the $(k+1)$th sampling point is \begin{align}\bar X_{k+1}-X_{k+1}=A_\xi(\hat X_k-X_k)+\varphi_k -d_k, \tag{39} \end{align} where $\varphi_k=\varphi_2-\varphi_1$.
If $\Delta Q_k$ satisfies the inequality \begin{aligned} \Delta Q_k \geq & A_\xi E\left[(\hat X_k-X_k)(\hat X_k-X_k)^{\rm T}\right]A_\xi^{\rm T}+A_\xi E\left[(\hat X_k-X_k)(\varphi_k-d_k)^{\rm T}\right]+E\left[(\varphi_k-d_k)(\hat X_k-X_k)^{\rm T}\right]A_\xi^{\rm T} \\ &+E\left[(\varphi_k-d_k)(\varphi_k-d_k)^{\rm T}\right]-A_kE\left[(\hat X_k-X_k)(\hat X_k-X_k)^{\rm T}\right] A_k^{\rm T}, \\ \end{aligned} \tag{40} then the mean square error of the predicted value can be obtained as follows: \begin{aligned} E\left[(\bar X_{k+1}-X_{k+1})(\bar X_{k+1}-X_{k+1})^{\rm T}\right] =&A_\xi E\left[(\hat X_k-X_k)(\hat X_k-X_k)^{\rm T}\right]A_\xi^{\rm T}+A_\xi E\left[(\hat X_k-X_k)(\varphi_k-d_k)^{\rm T}\right] \\ &+E\left[(\varphi_k-d_k)(\hat X_k-X_k)^{\rm T}\right]A_\xi^{\rm T}+E\left[(\varphi_k-d_k)(\varphi_k-d_k)^{\rm T}\right] \\ \leq& A_k P_{k} A_k^{\rm T} + \Delta Q_k =\bar P_{k+1}, \end{aligned} \tag{41} which indicates that the prediction at the $(k+1)$th sampling point is quasi-consistent.

Similarly, the linearizations of the measurement functions $G( X_{k+1})$ and $G(\bar X_{k+1})$ at any $\eta \in \mathbb{R}^{6n}$ are $$G( X_{k+1})=G(\eta)+C_\eta(X_{k+1}-\eta)+\psi_1, \quad G(\bar X_{k+1})=G(\eta)+C_\eta(\bar X_{k+1}-\eta)+\psi_2, \tag{42}$$ where $C_\eta=\frac{\partial G}{\partial X^{\rm T}}|_{X=\eta}$, and the linearization errors are $$\psi_1=\frac{1}{2} (X_{k+1}-\eta)^{\rm T} \left.\frac{\partial^2 G}{\partial X^2}\right|_{\eta} (X_{k+1}-\eta)+\cdots, \quad \psi_2=\frac{1}{2} (\bar X_{k+1}-\eta)^{\rm T} \left.\frac{\partial^2 G}{\partial X^2}\right|_{\eta} (\bar X_{k+1}-\eta)+\cdots. \tag{43}$$ Hence, the filtering error at the $(k+1)$th sampling point is $$\hat X_{k+1}-X_{k+1}=(I-K_kC_\eta)(\bar X_{k+1}-X_{k+1})-K_k\psi_{k+1}+K_k w_{k+1}, \tag{44}$$ where $\psi_{k+1}=\psi_2-\psi_1$. If $\Delta R_{k+1}$ satisfies the inequality \begin{aligned} \Delta R_{k+1}\geq &E\left[(I-K_kC_\eta)(\bar X_{k+1}-X_{k+1})(\bar X_{k+1}-X_{k+1})^{\rm T}(I-K_kC_\eta)^{\rm T}\right] -E\left[(I-K_k C_\eta)(\bar X_{k+1}-X_{k+1}) \psi_{k+1}^{\rm T} K_k^{\rm T}\right] \\ &-E\left[K_k\psi_{k+1}(\bar X_{k+1}-X_{k+1})^{\rm T}(I-K_kC_\eta)^{\rm T}\right] +E\left[K_k\psi _{k+1} \psi_{k+1}^{\rm T} K_k^{\rm T}\right] +E\left[K_kw_{k+1}w_{k+1}^{\rm T} K_k^{\rm T}\right] \\ &-K_kE\left[w_{k+1}w_{k+1}^{\rm T}\right] K_k^{\rm T} -(I-K_kC_{k+1})E\left[(\bar X_{k+1}-X_{k+1})(\bar X_{k+1}-X_{k+1})^{\rm T}\right](I-K_kC_{k+1})^{\rm T}, \end{aligned} \tag{45} then the mean square error of the filtering value satisfies \begin{aligned} &E\left[(\hat X_{k+1}-X_{k+1})(\hat X_{k+1}-X_{k+1})^{\rm T}\right] \\ & =E\left[(I-K_kC_\eta)(\bar X_{k+1}-X_{k+1})(\bar X_{k+1}-X_{k+1})^{\rm T}(I-K_kC_\eta)^{\rm T}\right] -E\left[(I-K_kC_\eta)(\bar X_{k+1}-X_{k+1}) \psi_{k+1}^{\rm T} K_k^{\rm T}\right] \\ & -E\left[K_k\psi_{k+1}(\bar X_{k+1}-X_{k+1})^{\rm T}(I-K_kC_\eta)^{\rm T}\right] +E\left[K_k\psi _{k+1} \psi_{k+1}^{\rm T} K_k^{\rm T}\right]+E\left[K_kw_{k+1}w_{k+1}^{\rm T} K_k^{\rm T}\right] \\ & \leq (I-K_kC_{k+1})\bar P_{k+1}(I-K_kC_{k+1})^{\rm T}+K_kR_{k+1}K_k^{\rm T}+ \Delta R_{k+1} =P_{k+1}, \end{aligned} \tag{46} which means that the filtering result at the $(k+1)$th sampling point is quasi-consistent. Therefore, when $\Delta Q_k$ and $\Delta R_{k+1}$ satisfy the inequalities (40) and (45), the results of the prediction, the filtering, and the estimation are all quasi-consistent.

Next, we prove that $\Delta Q_k$ and $\Delta R_{k+1}$ in (18) satisfy (40) and (45). Set $\Delta Q_k$ and $\Delta R_{k+1}$ as follows: \begin{aligned} &\Delta Q_k = (1+\alpha_{k})A_\xi P_k A_\xi^{\rm T} +\left(1+\frac{1}{\alpha_{k}}\right)\Delta Q_{\varphi,k} -A_k P_k A_k^{\rm T}, \\ &\Delta R_{k+1} = (1+\beta_{k}) (I-K_kC_\eta)\bar P_{k+1}(I-K_kC_\eta)^{\rm T} +\left(1+\frac{1}{\beta_{k}}\right) K_k \Delta R_{\psi,k+1} K_k^{\rm T} -(I-K_kC_{k+1})\bar P_{k+1}(I-K_kC_{k+1})^{\rm T}, \end{aligned} \tag{47} where \begin{aligned} &\alpha_{k}=\sqrt{\frac{\|\Delta Q_{\varphi,k}\|}{\|A_\xi P_k A_\xi^{\rm T}\|}}, \quad \Delta Q_{\varphi,k}=6n {\rm{diag}} \left( \left[ \delta\varphi_{k,1}^2,\ldots, \delta\varphi_{k,6n}^2 \right] \right)+2d_k d_k^{\rm T}, \\ &\beta_{k}=\sqrt{\frac{\| K_k \Delta R_{\psi,k+1} K_k^{\rm T}\|}{\| (I-K_kC_\eta)\bar P_{k+1}(I- K_kC_\eta)^{\rm T} \|}}, \quad \Delta R_{\psi,k+1}={m_{\rm nec}} {\rm{diag}} \left( \left[ \delta\psi_{k+1,1}^2,\ldots, \delta \psi_{k+1,{m_{\rm nec}}+9}^2 \right] \right), \\ \end{aligned} \tag{48} and $\delta \varphi_{k,i}^2$, $\delta \psi_{k+1,j}^2$ are the upper bounds of the linearization errors' variances, that is $$\delta \varphi_{k,i}^2 \geq E(\varphi_{k,i}^2), \quad \delta \psi_{k+1,j}^2 \geq E(\psi_{k+1,j}^2), \quad i=1,\ldots,6n, \quad j=1,\ldots,{m_{\rm nec}}+9. \tag{49}$$ Ignoring the effect of the third-order terms of the Taylor series expansions in (38) and (43), the linearization errors are \begin{aligned} &\varphi_{k,i}= \begin{cases} 0, i=6j-5,\ldots,6j-3, j=1,\ldots,n. \cr \frac{\Delta t}{2} (\hat X_k-X_k)^{\rm T} \left.\frac{\partial^2 f_j(l)}{\partial X^2}\right|_{\xi} (\hat X_k-X_k) + \Delta t(\hat X_k-X_k)^{\rm T} \left.\frac{\partial^2 f_j(l)}{\partial X^2}\right|_{\xi} (X_k-\xi), i=6j-3+l, l=1,2,3, j=1,\ldots,n. \end{cases} \\ &\psi_{k+1,j}= \begin{cases} 0, j=1,\ldots,9. \cr \frac{1}{2} (\bar X_{k+1}-X_{k+1})^{\rm T} \left.\frac{\partial^2 G(j)}{\partial X^2}\right|_{\eta} (\bar X_{k+1}-X_{k+1})+(\bar X_{k+1}-X_{k+1})^{\rm T} \left.\frac{\partial^2 G(j)}{\partial X^2}\right|_{\eta} (X_{k+1}-\eta), j=10,\ldots,{m_{\rm nec}}+9. \end{cases} \\ \end{aligned} \tag{50} Since the randomness of the gain matrix $K_k$ can be neglected for the navigation of the DSS studied, and the following inequality holds: $$AB^{\rm T}+BA^{\rm T}\leq \theta AA^{\rm T}+\frac{1}{\theta} BB^{\rm T}, \quad \forall \theta >0, \tag{51}$$ where $A,B\in \mathbb{R}^{n_1\times n_2}$, $n_1,n_2\in \mathbb{N}_+$, the matrices $\Delta Q_k$ and $\Delta R_{k+1}$ in (47) with the constraints of $\delta \varphi_{k,i}^2$ and $\delta \psi_{k+1,j}^2$ in (49) satisfy the inequalities (40) and (45).
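Inequality (51) follows from $(\sqrt{\theta}A-\frac{1}{\sqrt{\theta}}B)(\sqrt{\theta}A-\frac{1}{\sqrt{\theta}}B)^{\rm T}\geq 0$. A quick numerical spot-check (random matrices, purely illustrative):

```python
import numpy as np

# Numerical spot-check (a sketch, not a proof) of matrix inequality (51):
# theta*A A^T + (1/theta)*B B^T - A B^T - B A^T is positive semidefinite,
# being (sqrt(theta)*A - B/sqrt(theta)) times its own transpose.

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((4, 6))
    B = rng.standard_normal((4, 6))
    theta = rng.uniform(0.1, 10.0)
    M = theta * A @ A.T + B @ B.T / theta - A @ B.T - B @ A.T
    assert np.linalg.eigvalsh(M).min() > -1e-9   # PSD up to round-off
print("inequality (51) holds on all random samples")
```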

To be specific, if we choose $\xi=\hat X_k$ and $\eta=\bar X_{k+1}$, then $\Delta Q_k$ and $\Delta R_{k+1}$ in (18), with $\delta \varphi_{k,i}^2$ and $\delta \psi_{k+1,j}^2$ chosen as in (19), satisfy (40) and (45). Therefore, the results of the prediction, the filtering, and the estimation of the navigation algorithm (14)-(17) are quasi-consistent.
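As an illustration of the process-side inflation in (47)-(48), the sketch below computes $\Delta Q_k$ from a given bound $\Delta Q_{\varphi,k}$ on the linearization error. The function name and inputs are assumptions for illustration, not the paper's implementation, and the Frobenius norm stands in for the norm used in (48).

```python
import numpy as np

# Sketch of the adaptive inflation (47)-(48), process side.  dQ_phi (the bound
# built from the linearization errors and d_k) is assumed given here.

def inflated_process_cov(A_xi, A_k, P_k, dQ_phi):
    """Delta Q_k of (47), with alpha_k chosen as in (48)."""
    alpha = np.sqrt(np.linalg.norm(dQ_phi) / np.linalg.norm(A_xi @ P_k @ A_xi.T))
    return ((1 + alpha) * A_xi @ P_k @ A_xi.T
            + (1 + 1 / alpha) * dQ_phi
            - A_k @ P_k @ A_k.T)

# When A_xi = A_k the result reduces to alpha*A P A^T + (1 + 1/alpha)*dQ_phi,
# a sum of PSD terms, so the inflation itself is positive semidefinite.
A = np.eye(4) + 0.01 * np.ones((4, 4))
dQ = inflated_process_cov(A, A, np.eye(4), 0.1 * np.eye(4))
print(np.all(np.linalg.eigvalsh(dQ) >= 0))   # True
```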

Proof of Theorem TH_fus

We prove that for any $\omega_j\in\Omega$, the corresponding estimation error satisfies $$E(\hat X_{{\rm fus},k}(j)-X_k(j))^2 \leq U_{{\rm fus},k}(j), \quad \forall k, \forall j\in\{1,2,\ldots,6n\}. \tag{52}$$ Let $e_f=\hat X_{{\rm fus},k}(j)-X_k(j)$, $u_f=U_{{\rm fus},k}(j)$, $e_i=\tilde X_{i,k}(j)-X_k(j)$, $p_i=\tilde P_{i,k}(j,j)$ and $\alpha_i=\omega_j(i)$, $i=1,\ldots,N$, for all $k$ and $j$. Then the equivalent form of (52) is $E(e_f^2)\leq u_f$ for $N=2^{m_{\rm red}}$, which will be proved by mathematical induction.

Firstly, we prove the inequality $E(e_f^2)\leq u_f$ for $N=2$. From (26), we can obtain $e_f^2=u_f^2 \left( \frac{\alpha_1}{p_1} e_1 + \frac{\alpha_2}{p_2} e_2 \right)^2$. Hence, it suffices to prove that $$\frac{1}{u_f} \geq E\left[ \left( \frac{\alpha_1}{p_1} e_1 + \frac{\alpha_2}{p_2} e_2 \right)^2\right]. \tag{53}$$ According to the quasi-consistency of the estimation $\tilde X_{i,k}$, i.e., $E(e_i^2) \leq p_i$, we have $$\frac{1}{u_f}=\frac{\alpha_1}{p_1} + \frac{\alpha_2}{p_2} \geq \frac{\alpha_1}{p_1^2} E(e_1^2) + \frac{\alpha_2}{p_2^2} E(e_2^2). \tag{54}$$ Meanwhile, considering the relationship $\alpha_1+\alpha_2=1$, we can obtain $$\frac{\alpha_1}{p_1^2} e_1^2 + \frac{\alpha_2}{p_2^2} e_2^2 - \left( \frac{\alpha_1}{p_1} e_1 + \frac{\alpha_2}{p_2} e_2 \right)^2 =\alpha_1(1-\alpha_1) \left( \frac{e_1}{p_1} - \frac{e_2}{p_2} \right)^2 \geq 0. \tag{55}$$ Therefore, from (54) and (55), we can get the inequality (53), which means that (52) holds for $N=2$.
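The $N=2$ case can be checked by Monte Carlo. In the sketch below the bounds, weights, and noise distributions are illustrative assumptions; what it exhibits is that the inverse-variance fusion of two quasi-consistent estimates keeps the fused bound $u_f$ valid.

```python
import numpy as np

# Monte-Carlo sketch of the N = 2 case (53)-(55): fuse two quasi-consistent
# scalar estimates with weights alpha_i/p_i; the fused bound u_f of (54)
# should still dominate the fused mean square error.

rng = np.random.default_rng(1)
p1, p2 = 4.0, 1.0                              # claimed upper bounds on E(e_i^2)
e1 = rng.normal(0.0, np.sqrt(3.0), 100_000)    # true variance 3.0 <= p1
e2 = rng.normal(0.0, np.sqrt(0.8), 100_000)    # true variance 0.8 <= p2
a1, a2 = 0.5, 0.5                              # any weights with a1 + a2 = 1
u_f = 1.0 / (a1 / p1 + a2 / p2)                # fused bound, as in (54)
e_f = u_f * (a1 / p1 * e1 + a2 / p2 * e2)      # fused error, as in (26)
print(np.mean(e_f**2) <= u_f)                  # True: quasi-consistency preserved
```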

Then, assume that (52) holds for $N={N_0}$. For $N={N_0}+1$, from (26), we have $$\frac{1}{u_f}=\frac{\alpha_1}{p_1} +\cdots+ \frac{\alpha_{{N_0}+1}}{p_{{N_0}+1}}, \quad e_f=u_f \left( \frac{\alpha_1}{p_1} e_1 +\cdots + \frac{\alpha_{{N_0}+1}}{p_{{N_0}+1}} e_{{N_0}+1} \right) . \tag{56}$$ Let $$\frac{1}{\bar{p}_1}=\frac{\beta_1}{p_1} +\cdots+ \frac{\beta_{N_0}}{p_{N_0}}, \quad \bar{e}_1=\bar{p}_1 \left( \frac{\beta_1}{p_1} e_1 +\cdots+ \frac{\beta_{N_0}}{p_{N_0}} e_{N_0} \right), \tag{57}$$ where $\beta_i=\frac{\alpha_i}{\alpha_1+\cdots+\alpha_{N_0}}$, $i=1,2,\ldots,N_0$; then from the assumption for $N={N_0}$, we can get $E(\bar{e}_1^2)\leq \bar{p}_1$. Similarly, let $$\frac{1}{\bar{p}_2}=\frac{\gamma_1}{\bar{p}_1} + \frac{\gamma_2}{p_{{N_0}+1}}, \quad \bar{e}_2=\bar{p}_2 \left( \frac{\gamma_1}{\bar{p}_1} \bar{e}_1+ \frac{\gamma_2}{p_{{N_0}+1}} e_{{N_0}+1}\right), \tag{58}$$ where $\gamma_1=1-\alpha_{{N_0}+1}$, $\gamma_2=\alpha_{{N_0}+1}$; then from the result for $N=2$, we have $E(\bar{e}_2^2)\leq \bar{p}_2$. Moreover, we can obtain \begin{aligned} \frac{1}{\bar{p}_2}&=(1-\alpha_{{N_0}+1})\left( \frac{\beta_1}{p_1} +\cdots+ \frac{\beta_{N_0}}{p_{N_0}} \right) + \frac{\alpha_{{N_0}+1}}{p_{{N_0}+1}} =\frac{\alpha_1}{p_1} +\cdots + \frac{\alpha_{{N_0}+1}}{p_{{N_0}+1}}=\frac{1}{u_f}, \end{aligned} \tag{59} and \begin{aligned} \bar{e}_2=&\bar{p}_2 \left[ (1-\alpha_{{N_0}+1}) \left( \frac{\beta_1}{p_1} e_1 +\cdots+ \frac{\beta_{N_0}}{p_{N_0}} e_{N_0} \right) + \frac{\alpha_{{N_0}+1}}{p_{{N_0}+1}} e_{{N_0}+1}\right] =u_f \left( \frac{\alpha_1}{p_1} e_1 + \cdots + \frac{\alpha_{{N_0}+1}}{p_{{N_0}+1}} e_{{N_0}+1} \right)=e_f, \end{aligned} \tag{60} which means that (52) holds for $N=N_0+1$. Therefore, by mathematical induction, (52) holds for every positive integer $N$ and every $\omega_j\in\Omega$.
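The induction step (57)-(60) says that fusing all $N$ estimates at once coincides with fusing the first $N-1$ and then combining the result with the last estimate. A numerical sketch (the values and the helper `fuse` are illustrative):

```python
import numpy as np

# Sketch of (57)-(60): information-weighted fusion is associative, so the
# one-shot fusion of N estimates equals the recursive (N-1)-then-1 fusion.

def fuse(errors, ps, alphas):
    """Fused bound u and fused error e for weights alphas summing to 1."""
    u = 1.0 / np.sum(alphas / ps)
    return u, u * np.sum(alphas / ps * errors)

e = np.array([0.3, -0.1, 0.4]); p = np.array([2.0, 1.0, 4.0])
a = np.array([0.2, 0.5, 0.3])

u_all, e_all = fuse(e, p, a)                      # fuse all three at once
b = a[:2] / a[:2].sum()                           # beta_i of (57)
p1bar, e1bar = fuse(e[:2], p[:2], b)              # fuse the first N-1
u_two, e_two = fuse(np.array([e1bar, e[2]]),      # then add the last one
                    np.array([p1bar, p[2]]),
                    np.array([1 - a[2], a[2]]))   # gamma weights of (58)
print(np.isclose(u_all, u_two), np.isclose(e_all, e_two))  # True True
```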

Specifically, for $\omega_j=\omega_j^*$ and $N=2^{m_{\rm red}}$, the corresponding results in property (i) can be obtained.

The results in property (ii) follow directly from the fusion scheme in (27) and (28).


• Figure 1  Relative orbits of $S_1,S_2,\ldots,S_7$ in the LVLH frame of the virtual reference satellite.

• Figure 2  The mean square error and the diagonal elements of $P^{\rm EKF}_k$ from the EKF-based navigation algorithm of $S_4$. (a) Position errors; (b) velocity errors.

• Figure 3  The mean square error and the diagonal elements of $\tilde P_k$ from the QCEKF-based navigation algorithm of $S_4$. (a) Position errors; (b) velocity errors.

• Figure 6  The mean square error and the diagonal elements of $\tilde P_k$ from the QCEKF-based navigation algorithm of $S_7$. (a) Position errors; (b) velocity errors.

• Figure 7  The mean square error and the components of $U_{{\rm fus},k}^*$ from the fusion algorithm of $S_4$. (a) Position errors; (b) velocity errors.

• Figure 10  The mean square error and the components of $U_{{\rm fus},k}^*$ from the fusion algorithm of $S_7$. (a) Position errors; (b) velocity errors.

• Figure 11  Results of the fusion algorithm and the algorithm without fusion of $S_4$. (a) Position errors; (b) velocity errors.

• Figure 12  Results of the fusion algorithm and the centralization algorithm of $S_4$. (a) Position errors; (b) velocity errors.

• Table 1   The choices of the redundant measurements for the fusion navigation algorithm
 $t_1=161.08$ s $t_4=1928.66$ s $\rho_{45}$ $\rho_{46}$ $\rho_{47}$ $\rho_{56}$ $\rho_{57}$ $\rho_{67}$ $\rho_{45}$ $\rho_{46}$ $\rho_{47}$ $\rho_{56}$ $\rho_{57}$ $\rho_{67}$ $X_4(1)$ $\star$ $\star$ $X_4(1)$ $\star$ $\star$ $\star$ $X_4(2)$ $\star$ $\star$ $\star$ $X_4(2)$ $\star$ $\star$ $X_4(3)$ $\star$ $\star$ $X_4(3)$ $\star$ $X_4(4)$ $\star$ $\star$ $X_4(4)$ $\star$ $\star$ $\star$ $X_4(5)$ $\star$ $X_4(5)$ $\star$ $\star$ $\star$ $X_4(6)$ $\star$ $X_4(6)$ $\star$ $\star$ $\star$ $X_5(1)$ $\star$ $X_5(1)$ $\star$ $\star$ $\star$ $\star$ $\star$ $\star$ $X_5(2)$ $\star$ $\star$ $X_5(2)$ $\star$ $\star$ $\star$ $\star$ $\star$ $\star$ $X_5(3)$ $\star$ $\star$ $\star$ $X_5(3)$ $\star$ $X_5(4)$ $\star$ $\star$ $\star$ $X_5(4)$ $\star$ $X_5(5)$ $\star$ $\star$ $X_5(5)$ $\star$ $\star$ $X_5(6)$ $\star$ $\star$ $X_5(6)$ $\star$ $\star$ $\star$ $X_6(1)$ $\star$ $\star$ $X_6(1)$ $\star$ $X_6(2)$ $\star$ $\star$ $\star$ $X_6(2)$ $\star$ $X_6(3)$ $\star$ $\star$ $\star$ $X_6(3)$ $\star$ $\star$ $\star$ $X_6(4)$ $\star$ $\star$ $\star$ $X_6(4)$ $\star$ $\star$ $\star$ $X_6(5)$ $\star$ $\star$ $X_6(5)$ $\star$ $\star$ $\star$ $X_6(6)$ $\star$ $\star$ $X_6(6)$ $\star$ $\star$ $\star$ $X_7(1)$ $\star$ $\star$ $X_7(1)$ $\star$ $\star$ $\star$ $X_7(2)$ $\star$ $\star$ $X_7(2)$ $\star$ $\star$ $X_7(3)$ $\star$ $\star$ $\star$ $X_7(3)$ $\star$ $\star$ $\star$ $X_7(4)$ $\star$ $\star$ $\star$ $X_7(4)$ $\star$ $\star$ $\star$ $X_7(5)$ $\star$ $X_7(5)$ $\star$ $\star$ $\star$ $X_7(6)$ $\star$ $\star$ $X_7(6)$ $\star$ $\star$ $\star$
