
SCIENCE CHINA Information Sciences, Volume 59, Issue 1: 012106 (2016) https://doi.org/10.1007/s11432-015-5356-0

A compressive tracking based on time-space Kalman fusion model

  • Received: Feb 25, 2015
  • Accepted: Apr 1, 2015
  • Published: Jul 16, 2015

Abstract

The compressive tracking (CT) method is a simple yet efficient algorithm that compresses high-dimensional features into a low-dimensional space while preserving most of the salient information. This paper proposes a compressive time-space Kalman fusion tracking algorithm that extends the CT method to multi-sensor fusion tracking. Existing fusion trackers process multi-sensor features individually and lack time-space adaptability. Moreover, the significant information accumulated during the updating process has not been fully exploited, making temporal information extraction necessary. Unlike previous algorithms, the proposed fusion model operates in both the space and time domains. In addition, an extended Kalman filter is introduced to formulate an updating method that optimizes the fusion coefficients. The accuracy and robustness of the proposed fusion tracking algorithm are demonstrated by several experimental results.
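The two building blocks named in the abstract can be illustrated in a few lines of code: compressing high-dimensional features with a sparse random projection, and adapting a fusion coefficient over time with a Kalman-style update. The sketch below is purely illustrative and is not the paper's implementation: the feature dimensions, the two-sensor setup (`x_visible`, `x_infrared`), the noise parameters, and the use of a linear scalar Kalman filter in place of the paper's extended Kalman filter are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection_matrix(n_low, n_high, s=3):
    """Sparse random measurement matrix (Achlioptas-style): entries are
    +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s)."""
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_low, n_high),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

def compress(features, R):
    """Project a high-dimensional feature vector into the low-dim space."""
    return R @ features

# Hypothetical high-dimensional feature vectors from two sensors.
x_visible = rng.standard_normal(10_000)
x_infrared = rng.standard_normal(10_000)

R = sparse_projection_matrix(50, 10_000)
v_vis = compress(x_visible, R)   # shape (50,)
v_ir = compress(x_infrared, R)   # shape (50,)

# Spatial fusion: weighted combination of the compressed features.
w = 0.5                          # fusion coefficient, adapted over time
v_fused = w * v_vis + (1 - w) * v_ir

# Temporal update of the fusion coefficient with a scalar Kalman filter
# (the paper uses an extended Kalman filter; this linear version only
# conveys the predict/update idea).
w_est, P = 0.5, 1.0              # state estimate and its variance
Q, Rn = 1e-3, 1e-2               # process and measurement noise (assumed)
for z in [0.6, 0.55, 0.62]:      # hypothetical observed coefficients
    P += Q                       # predict: variance grows by process noise
    K = P / (P + Rn)             # Kalman gain
    w_est += K * (z - w_est)     # update estimate toward the observation
    P *= (1 - K)                 # shrink variance after the update
```

The sparse matrix keeps the projection cheap (most entries are zero), which is what makes the compressive representation practical for real-time tracking; the Kalman recursion then lets the fusion weight track slow changes in sensor reliability instead of being fixed.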


Funded by

National Natural Science Foundation of China (61175028)

Ph.D. Programs Foundation of Ministry of Education of China (20090073110045)

National Natural Science Foundation of China (61365009)


Acknowledgments

This work was supported by National Natural Science Foundation of China (Grant Nos. 61175028, 61365009) and Ph.D. Programs Foundation of Ministry of Education of China (Grant No. 20090073110045).


