
SCIENTIA SINICA Informationis, Volume 50, Issue 12: 1919 (2020) https://doi.org/10.1360/SSI-2019-0237

Multi-sensor fusion for unmanned aerial vehicles based on the combination of filtering and optimization

  • Received: Oct 25, 2019
  • Accepted: Dec 14, 2019
  • Published: Oct 20, 2020

Abstract

Accurate and real-time state estimation is the first step toward the safe flight and operation of unmanned aerial vehicles (UAVs). Multi-sensor fusion, e.g., of vision, IMU, and GPS measurements, can improve the accuracy of state estimation and even keep it working when some sensors are unavailable. This paper therefore proposes a multi-sensor fusion method based on the combination of filtering and optimization to achieve locally accurate and globally drift-free state estimation. The proposed method has two components: a Kalman filter and a global optimization. The Kalman filter forms the main structure of the fusion framework; it fuses a local sensor (IMU) with global sensors (aligned global visual-inertial odometry, GPS, magnetometer, and barometer) to obtain global state estimates in real time. The global optimization estimates the transformation between the local base frame of the visual-inertial odometry and the global base frame, yielding an accurate global visual estimate. Because the optimization runs discontinuously and the odometry arrives with a delay, the aligned visual odometry is then fed into the Kalman filter to achieve accurate and drift-free state estimation in real time. Finally, flight and localization tests on a practical UAV were conducted. The experimental results demonstrate the effectiveness and robustness of the proposed multi-sensor fusion method.
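To make the two-stage structure described in the abstract concrete, below is a minimal Python sketch, not the authors' implementation: a least-squares (Kabsch/Umeyama-style) point-set alignment stands in for the global optimization that estimates the local-to-global transform, and a simple constant-velocity Kalman filter stands in for the paper's error-state filter that fuses the IMU with global measurements. All names here (align_local_to_global, PositionKF) are illustrative assumptions.

```python
# Minimal sketch of the two-stage idea: (1) align the VIO's local frame to
# the global (GPS) frame, (2) feed the aligned VIO positions to a Kalman
# filter as drift-free global position measurements.
import numpy as np

def align_local_to_global(p_local, p_global):
    """Estimate R, t with p_global ~ R @ p_local + t from matched position
    samples (Kabsch/Umeyama alignment without scale)."""
    mu_l, mu_g = p_local.mean(axis=0), p_global.mean(axis=0)
    H = (p_local - mu_l).T @ (p_global - mu_g)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation
    t = mu_g - R @ mu_l
    return R, t

class PositionKF:
    """Constant-velocity Kalman filter over [position, velocity]; the
    aligned VIO position enters as a global position measurement."""
    def __init__(self, dim=3, q=0.1, r=0.5):
        self.x = np.zeros(2 * dim)                  # [position, velocity]
        self.P = np.eye(2 * dim)
        self.dim, self.q, self.r = dim, q, r

    def predict(self, dt):
        F = np.eye(2 * self.dim)
        F[:self.dim, self.dim:] = dt * np.eye(self.dim)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(2 * self.dim)

    def update_position(self, z):
        H = np.hstack([np.eye(self.dim), np.zeros((self.dim, self.dim))])
        S = H @ self.P @ H.T + self.r * np.eye(self.dim)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z - H @ self.x)
        self.P = (np.eye(2 * self.dim) - K @ H) @ self.P

# Example: recover a synthetic local-to-global transform, then run the
# filter on the aligned VIO stream.
rng = np.random.default_rng(0)
p_local = rng.normal(size=(50, 3))
yaw = 0.4
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
p_global = p_local @ R_true.T + np.array([10.0, -5.0, 2.0])
R, t = align_local_to_global(p_local, p_global)
kf = PositionKF()
for p in p_local:
    kf.predict(dt=0.02)
    kf.update_position(R @ p + t)   # aligned VIO as global measurement
```

In the paper's pipeline, the filter state is the full error state (attitude, velocity, position, IMU biases) and the local-to-global transform is estimated by graph optimization over a window of poses; the least-squares alignment above only illustrates the role that transform plays before the odometry re-enters the filter.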


Funded by

National Natural Science Foundation of China (Grant Nos. U1508208, U1608253, 91748130)



  • Figure 1

(Color online) The multi-sensor fusion framework combining the filtering and optimization methods

  • Figure 2

    (Color online) The fusion framework of global optimization

  • Table 1   RMSE (m) of the compared algorithms over five flights in each of the three experiments

    Experiment 1
      Flight distance (m)   158.6   128.9   157.5   159.5   146.6
      VIO                    0.41    0.86    0.70    0.75    1.49
      VINS-Fusion            0.60    0.75    0.71    0.51    1.33
      Proposed               0.59    0.70    0.62    0.52    1.06

    Experiment 2
      Flight distance (m)   293.9   284.7   301.2   325.1   294.0
      VIO                    1.90    1.78    1.72    1.66    2.62
      VINS-Fusion            2.08    3.75    3.55    4.20    2.70
      Proposed               1.06    0.80    1.04    1.43    1.15

    Experiment 3
      Flight distance (m)   579.4   552.8   607.4   585.9   585.1
      VIO                    8.87   15.20   12.98   12.66   14.09
      VINS-Fusion            1.96    4.51    1.39    2.59    2.07
      Proposed               1.33    2.39    1.34    2.15    1.87
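Assuming the RMSE figures in Table 1 are root-mean-square absolute position errors against a time-associated ground-truth track (the authors cite the evo package, github.com/MichaelGrupp/evo, for this kind of trajectory evaluation), a minimal sketch of the metric follows; the helper name position_rmse is hypothetical.

```python
# Sketch: RMSE of per-pose Euclidean position errors, assuming the
# estimated and ground-truth trajectories are already matched in time.
import numpy as np

def position_rmse(est_xyz, gt_xyz):
    """est_xyz, gt_xyz: (N, 3) arrays of matched positions; returns meters."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-pose error norms
    return float(np.sqrt(np.mean(err ** 2)))
```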