The fisheye camera has become the most popular device for capturing panoramic video, and the stitching of fisheye images has accordingly drawn increasing attention. To suppress the artifacts caused by exposure differences among multiple cameras, this paper proposes a luminance compensation method based on unevenly sampled histograms. Because the spatial sampling rate of a panorama varies with latitude, the proposed method samples the overlap region of two adjacent images accordingly to compute histograms, and matches the histograms pairwise to balance the luminance. Moreover, considering the influence of the luminance distribution on panoramic video quality, we present an adaptive reference image selection method; the reference image selected in this way effectively improves the overall quality of the panorama. Because it accounts for the distortion characteristics of fisheye images, the proposed method does not require accurate alignment and adapts well to luminance variations. Experimental results show that the proposed method significantly reduces the time complexity of panoramic video stitching without obvious quality degradation, making it suitable for real-time fisheye-based panoramic video stitching.
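The pairwise histogram matching described above can be sketched as follows. This is a minimal single-channel illustration, not the paper's implementation: the function name `match_luminance`, the 256-bin quantization, and the cumulative-histogram (CDF) lookup-table formulation are assumptions; the paper additionally samples the overlap region unevenly by latitude before building the histograms.

```python
import numpy as np

def match_luminance(src, ref):
    """Map the luminance of `src` toward `ref` via cumulative-histogram matching.

    `src` and `ref` are uint8 arrays holding (samples of) the overlap region
    of two adjacent images; the returned array has `src`'s shape.
    """
    # Build normalized cumulative histograms (CDFs) of both regions.
    src_hist, _ = np.histogram(src.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(ref.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source level, pick the reference level whose CDF first reaches
    # the source CDF value; this gives a monotone 256-entry lookup table.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]
```

In a stitching pipeline the lookup table would be estimated on the overlap region only and then applied to the whole image, so that both images agree in the seam area.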
National High Technology Research and Development Program of China (863 Program) (2015AA015903)
[1] Deng X M, Wu F C, Wu Y H, et al. Automatic spherical panorama generation with two fisheye images. In: Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, 2008. 5955--5959
[2] Ho T, Budagavi M. Dual-fisheye lens stitching for 360-degree imaging. In: Proceedings of International Conference on Acoustics, Speech and Signal Processing, New Orleans, 2017
[3] Liao W S, Hsieh T J, Liang W Y, et al. Real-time spherical panorama image stitching using OpenCL. In: Proceedings of International Conference on Computer Graphics and Virtual Reality, Las Vegas, 2011. 113--119
[4] Xu Y, Zhou Q, Gong L. High-speed simultaneous image distortion correction transformations for a multicamera cylindrical panorama real-time video system using FPGA. IEEE Trans Circuits Syst Video Technol, 2014, 24: 1061--1069
[5] Reinhard E, Adhikhmin M, Gooch B, et al. Color transfer between images. IEEE Comput Graph Appl, 2001, 21: 34--41
[6] Tian G Y, Gledhill D, Taylor D, et al. Colour correction for panoramic imaging. In: Proceedings of the 6th International Conference on Information Visualisation, London, 2002. 483--488
[7] Brown M, Lowe D G. Automatic panoramic image stitching using invariant features. Int J Comput Vision, 2007, 74: 59--73
[8] Kim Y N, Sim D G. Vignetting and illumination compensation for omni-directional image generation on spherical coordinate. In: Proceedings of the 16th International Conference on Artificial Reality and Telexistence -- Workshops, Hangzhou, 2006. 413--418
[9] Fecker U, Barkowsky M, Kaup A. Histogram-based prefiltering for luminance and chrominance compensation of multiview video. IEEE Trans Circuits Syst Video Technol, 2008, 18: 1258--1267
[10] Yamamoto K, Oi R. Color correction for multi-view video using energy minimization of view networks. Int J Autom Comput, 2008, 5: 234--245
[11] Basu A, Licardie S. Alternative models for fish-eye lenses. Pattern Recognition Lett, 1995, 16: 433--441
[12] Salomon D. Transformations and Projections in Computer Graphics. Berlin: Springer, 2007
[13] Ye Y, Alshina E, Boyce J. Algorithm descriptions of projection format conversion and video quality metrics in 360Lib version 54. In: Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, Document JVET-H1004, Macau, 2017
[14] Szeliski R, Shum H Y. Creating full view panoramic image mosaics and environment maps. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1997. 251--258
[15] Sun Y, Lu A, Yu L. AHG8: WS-PSNR for 360 video objective quality evaluation. In: Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, Document JVET-D0040, Chengdu, 2016
[16] Chen J L, Liu J L, Ye J H, et al. Luminance and chrominance correction for multi-view video using overlapped local histogram matching. Chinese J Image Graph, 2007, 12: 1992--1999
[17] Ruderman D L, Cronin T W, Chiao C C. Statistics of cone responses to natural images: implications for visual coding. J Opt Soc Am A, 1998, 15: 2036--2045
Figure 1
(Color online) Fisheye-based video capture and stitching system. (a) Frame for 6 GoPro cameras; (b) fisheye image; (c) equirectangular projection
Figure 2
(Color online) Overlap region before and after equirectangular projection. (a) Original image; (b) equirectangular projection result and the overlap region; (c) overlap region in the original image; (d) histograms of the two regions
Figure 3
Preprocessing of the fisheye image
Figure 4
(Color online) (a) Weight distribution map of WS-PSNR; (b) density distribution of an equirectangular projection panorama image
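WS-PSNR, proposed in [15] and pictured in Figure 4(a), weights each pixel's squared error by the sphere area it covers; for the equirectangular projection the weight of a row is the cosine of its latitude. A sketch under those assumptions (the helper names `erp_weights` and `ws_psnr` are ours, not from [15]):

```python
import numpy as np

def erp_weights(height, width):
    """Per-pixel sphere-area weights for an equirectangular image.

    Each row's weight is cos(latitude); rows near the poles, which the
    projection stretches, count less (cf. Figure 4(a)).
    """
    j = np.arange(height)
    row_w = np.cos((j + 0.5 - height / 2) * np.pi / height)
    return np.repeat(row_w[:, None], width, axis=1)

def ws_psnr(img, ref, peak=255.0):
    """Weighted-to-spherically-uniform PSNR of two 8-bit grayscale panoramas."""
    w = erp_weights(*img.shape)
    diff = img.astype(np.float64) - ref.astype(np.float64)
    wmse = np.sum(w * diff ** 2) / np.sum(w)  # weighted mean squared error
    return 10 * np.log10(peak ** 2 / wmse)
```

With a uniform weight map this reduces to ordinary PSNR, which is why the PSNR and WS-PSNR columns in the tables below track each other closely.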
Figure 6
(Color online) Real-time panorama video stitching system
Figure 7
(Color online) Luminance compensation results based on histogram statistics. (a) Original images; (b) corrected images
Figure 8
Histograms of three input images. From left to right: img0, img1, img5. (a) Histogram of the overlap region with the previous image; (b) histogram of the overlap region with the next image
Figure 9
(Color online) Results with different reference images. (a) Selecting img5 (score = 0.8527) as the reference image; (b) selecting img0 (score = 1.5063) as the reference image
Figure 10
(Color online) Experimental results of the reference method and the proposed method. (a) Ref.
Index | (min, max) | Time (ms) | PSNR (dB) | WS-PSNR (dB)
1 | (1, 1024) | 9.08 | 31.4385 | 31.5229
2 | (1, 512) | 9.93 | 31.9630 | 32.0129
3 | (1, 256) | 11.57 | 32.8647 | 32.8514
4 | (1, 128) | 13.30 | 33.9658 | 33.8869
5 | (1, 64) | 14.99 | 34.5430 | 34.4906
6 | (1, 32) | 17.54 | 35.7429 | 35.6996
7 | (1, 16) | 20.52 | 37.0278 | 36.8767
8 | (1, 8) | 24.31 | 39.6580 | 39.5388
9 | (2, 256) | 10.05 | 34.2231 | 34.1798
10 | (4, 256) | 9.11 | 34.4783 | 34.4546
11 | (8, 256) | 8.24 | 35.7685 | 35.7989
12 | (16, 256) | 7.51 | 38.0210 | 37.9278
13 | (32, 256) | 6.97 | 37.1566 | 37.1456
14 | (64, 256) | 6.58 | 34.8895 | 35.1714
15 | (128, 256) | 6.38 | 34.4281 | 34.0717
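The (min, max) pairs in the table above bound the sampling interval used when histogramming the overlap region: dense sampling near the equator, sparse sampling toward the poles where the equirectangular projection oversamples the sphere. The paper's exact mapping from latitude to interval is not given in this excerpt; the sketch below assumes a 1/cos(latitude) growth clamped to (min, max), and the function name `row_sampling_step` is ours.

```python
import numpy as np

def row_sampling_step(height, min_step, max_step):
    """Per-row horizontal sampling interval for an equirectangular image.

    Rows near the equator are sampled every `min_step` pixels; toward the
    poles the interval grows with the projection's horizontal stretching,
    clamped at `max_step` (the (min, max) pairs in the table above).
    """
    j = np.arange(height)
    lat = (j + 0.5 - height / 2) * np.pi / height   # latitude of each row
    step = min_step / np.maximum(np.cos(lat), 1e-6)  # sparser where stretched
    return np.clip(np.rint(step), min_step, max_step).astype(int)
```

Under this assumption a larger `min_step` (rows 9--15 of the table) skips pixels even at the equator, trading histogram fidelity for speed, which matches the observed drop in time alongside the non-monotone quality curve.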
Method | Time (ms)
 | 2173
Proposed method (without adaptive reference image) | 1115
Proposed method (with adaptive reference image) | 1143