
SCIENTIA SINICA Informationis, Volume 50, Issue 2: 239-260 (2020) https://doi.org/10.1360/N112018-00295

Medical image fusion algorithm based on detail enhancement and a pulse-coupled neural network model stimulated by parallel features

  • Received: Nov 1, 2018
  • Accepted: Mar 19, 2019
  • Published: Feb 12, 2020

Abstract

The purpose of medical image fusion is to integrate the complementary information of multimodal medical images into a single image, which aids clinical diagnosis, improves the accuracy of physicians' observations, and shortens the treatment period. This paper proposes a new fusion algorithm for anatomical and functional images. Local Laplacian filtering (LLF) is chosen as the decomposition tool because it enhances details and preserves edges, ensuring that the anatomical detail in the fused image is not obscured by the color information of the functional image. The algorithm proceeds as follows. First, LLF decomposes each source image into an approximate image and a series of detail images. Second, the approximate images are fused with an improved local-energy-maximum rule that combines regional energy and edge energy, while the detail images are fused with the parameter-adaptive simplified pulse-coupled neural network (PA-SPCNN) model; a novel sum-modified Laplacian (NSML) and a color-saliency feature (CSF) serve as the external stimuli of the PA-SPCNN for the anatomical and functional images, respectively. Finally, the fused image is obtained by the inverse LLF transform. Simulation experiments show that the proposed algorithm outperforms classical algorithms in both subjective and objective evaluation.
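Since LLF itself is not available in common Python imaging libraries, the following NumPy/SciPy sketch only illustrates the two fusion rules on already-decomposed layers. It is a minimal sketch under several stated assumptions: the standard sum-modified Laplacian stands in for the paper's novel SML, the absolute detail value stands in for the color-saliency feature, and the SPCNN parameters are fixed placeholders rather than the paper's adaptive settings. The function names (`sml`, `spcnn_fire_counts`, `fuse_base`, `fuse_detail`) and parameter values are illustrative, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

def sml(img, win=3):
    # Sum-modified Laplacian: a standard stand-in for the paper's "novel SML".
    ml = (np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)) +
          np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)))
    return convolve(ml, np.ones((win, win)), mode='reflect')

def spcnn_fire_counts(S, beta=0.5, vl=1.0, ve=20.0, af=0.3, ae=0.1, n_iter=110):
    # Simplified PCNN iterations (Chen et al. [29]). The paper adapts these
    # parameters to the stimulus S; fixed placeholder values are used here.
    S = S / (S.max() + 1e-12)              # normalize stimulus to [0, 1]
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])        # linking weights to the 8 neighbors
    U = np.zeros_like(S)                   # internal activity
    E = np.ones_like(S)                    # dynamic threshold
    Y = np.zeros_like(S)                   # firing map
    T = np.zeros_like(S)                   # accumulated firing times
    for _ in range(n_iter):
        L = convolve(Y, W, mode='reflect')             # linking input
        U = np.exp(-af) * U + S * (1.0 + beta * vl * L)
        Y = (U > E).astype(float)
        E = np.exp(-ae) * E + ve * Y
        T += Y
    return T

def fuse_base(base_a, base_b, win=3):
    # Local-energy-maximum rule for the approximate layer. Plain regional
    # energy only; the paper additionally mixes in an edge-energy term.
    k = np.ones((win, win))
    ea = convolve(base_a ** 2, k, mode='reflect')
    eb = convolve(base_b ** 2, k, mode='reflect')
    return np.where(ea >= eb, base_a, base_b)

def fuse_detail(det_a, det_b):
    # Detail-layer rule: SML of the anatomical layer and (as a stand-in for
    # the color-saliency feature) |detail| of the functional layer drive two
    # SPCNNs; each coefficient is taken from the layer that fires more often.
    ta = spcnn_fire_counts(sml(det_a))
    tb = spcnn_fire_counts(np.abs(det_b))
    return np.where(ta >= tb, det_a, det_b)
```

The full method would wrap these rules around an LLF analysis/synthesis pair and the adaptive parameter formulas of [29]; this sketch only conveys the firing-count selection logic.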


Funded by

National Natural Science Foundation of China (11761001, 11761003)

Ningxia Science and Technology Innovation Leading Talent Project (KJT2016002)


References

[1] James A P, Dasarathy B V. Medical image fusion: a survey of the state of the art. Inf Fusion, 2014, 19: 4-19

[2] Yang Y, Han C, Kang X, et al. A novel image fusion algorithm based on IHS and discrete wavelet transform. In: Proceedings of the IEEE International Conference on Automation & Logistics, Jinan, 2007. 1936-1940

[3] Selesnick I W, Baraniuk R G, Kingsbury N C. The dual-tree complex wavelet transform. IEEE Signal Process Mag, 2005, 22: 123-151

[4] Burt P J, Adelson E H. The Laplacian pyramid as a compact image code. Readings Comput Vision, 1987, 31: 671-679

[5] Toet A. Hierarchical image fusion. Mach Vis Appl, 1990, 3: 1-11

[6] Da Cunha A L, Zhou J, Do M N. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Trans Image Process, 2006, 15: 3089-3101

[7] Easley G, Labate D, Lim W Q. Sparse directional image representations using the discrete shearlet transform. Appl Comput Harmon Anal, 2008, 25: 25-46

[8] Liu X, Mei W, Du H. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter. Med Biol Eng Comput, 2018, 56: 1565-1578

[9] Liu X, Mei W, Du H. Multimodality medical image fusion algorithm based on gradient minimization smoothing filter and pulse coupled neural network. Biomed Signal Process Control, 2016, 30: 140-148

[10] Zhang Q, Shen X, Xu L, et al. Rolling guidance filter. In: Proceedings of the European Conference on Computer Vision, Zurich, 2014. 815-830

[11] Paris S, Hasinoff S W, Kautz J. Local Laplacian filters: edge-aware image processing with a Laplacian pyramid. ACM Trans Graph, 2011, 30: 68

[12] Li S T, Kang X D, Hu J W. Image fusion with guided filtering. IEEE Trans Image Process, 2013, 22: 2864-2875

[13] Liu X, Mei W, Du H. Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion. Neurocomputing, 2017, 235: 131-139

[14] Du J, Li W, Xiao B. Union Laplacian pyramid with multiple features for medical image fusion. Neurocomputing, 2016, 194: 326-339

[15] Du J, Li W, Xiao B. Medical image fusion by combining parallel features on multi-scale local extrema scheme. Knowledge-Based Syst, 2016, 113: 4-12

[16] Du J, Li W, Xiao B. Anatomical-functional image fusion by information of interest in local Laplacian filtering domain. IEEE Trans Image Process, 2017, 26: 5855-5866

[17] Wu Z, Huang Y, Zhang K. Remote sensing image fusion method based on PCA and curvelet transform. J Indian Soc Remote Sens, 2018, 46: 687-695

[18] Zhang K, Huang Y, Zhao C. Remote sensing image fusion via RPCA and adaptive PCNN in NSST domain. Int J Wavelets Multiresolut Inf Process, 2018, 16: 1850037

[19] Zhu Z Q, Yin H, Chai Y. A novel multi-modality image fusion method based on image decomposition and sparse representation. Inf Sci, 2018, 432: 516-529

[20] Aishwarya N, Bennila Thangammal C. A novel multimodal medical image fusion using sparse representation and modified spatial frequency. Int J Imaging Syst Technol, 2018, 28: 175-185

[21] Qu X B, Yan J W, Xiao H Z. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Autom Sin, 2008, 34: 1508-1514

[22] Xiong Y, Wu Y, Wang Y, et al. A medical image fusion method based on SIST and adaptive PCNN. In: Proceedings of the Control & Decision Conference, Chongqing, 2017. 5189-5194

[23] Zhan K, Zhang H J, Ma Y D. New spiking cortical model for invariant texture retrieval and image processing. IEEE Trans Neural Netw, 2009, 20: 1980-1986

[24] Yin M, Liu X, Liu Y. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans Instrum Meas, 2019, 68: 49-64

[25] Yin M, Duan P, Liu W. A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation. Neurocomputing, 2017, 226: 182-191

[26] Goferman S, Zelnik-Manor L, Tal A. Context-aware saliency detection. IEEE Trans Pattern Anal Mach Intell, 2012, 34: 1915-1926

[27] Kou F, Chen W, Wen C. Gradient domain guided image filtering. IEEE Trans Image Process, 2015, 24: 4528-4539

[28] He K, Sun J, Tang X. Guided image filtering. IEEE Trans Pattern Anal Mach Intell, 2013, 35: 1397-1409

[29] Chen Y L, Park S K, Ma Y D. A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans Neural Netw, 2011, 22: 880-892

[30] Yang Y, Que Y, Huang S. Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain. IEEE Sens J, 2016, 16: 3735-3745

[31] Liu Y, Chen X, Cheng J, et al. A medical image fusion method based on convolutional neural networks. In: Proceedings of the International Conference on Information Fusion, Xi'an, 2017. 1070-1076

[32] Yeganeh H, Wang Z. Objective quality assessment of tone-mapped images. IEEE Trans Image Process, 2013, 22: 657-667

[33] Eskicioglu A M, Fisher P S. Image quality measures and their performance. IEEE Trans Commun, 1995, 43: 2959-2965

[34] Wang Y, Du H, Xu J, et al. A no-reference perceptual blur metric based on complex edge analysis. In: Proceedings of the Network Infrastructure and Digital Content (IC-NIDC), Beijing, 2012. 487-491

[35] Liu Y, Liu S, Wang Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf Fusion, 2015, 24: 147-164

[36] Qu G, Zhang D, Yan P. Information measure for performance of image fusion. Electron Lett, 2002, 38: 313-315

  • Figure 1

    (Color online) Comparison of results before and after applying LLF to color and gray images. (a1) Color image; (a2) color image after LLF; (b1) gray image; (b2) pixel distribution map of gray image before LLF; (b3) pixel distribution map of gray image after LLF

  • Figure 2

    (Color online) Color saliency feature extraction. (a1) PET image; (a2) CSF of PET image; (b1) SPECT image; (b2) CSF of SPECT image

  • Figure 3

Diagram of the PA-SPCNN model

  • Figure 4

(Color online) Flow chart of the proposed algorithm ($L=3$)

  • Figure 5

    (Color online) (a1) and (a2) are the source images of MRI and PET; (a3)$\sim$(a6) are the fusion images of (a1) and (a2) when $L=2,3,4,5$, respectively; (b1) and (b2) are the source images of MRI and SPECT; (b3)$\sim$(b6) are the fusion images of (b1) and (b2) when $L=2,3,4,5$, respectively

  • Figure 6

    (Color online) TMQI values of different source image pairs when $\epsilon$ varies from $10^{-1}$ to $10^{-6}$. (a) MRI-SPECT; (b) MRI-PET

  • Figure 7

    (Color online) Four MRI-PET image pairs. (a1)$\sim$(a4) MRI images; (b1)$\sim$(b4) corresponding PET images

  • Figure 8

(Color online) Comparison of fusion results obtained by different methods on 4 MRI-PET image pairs. (a)$\sim$(d) show the fusion results for the four image pairs; within each group, the results were obtained by LP-SR, NSCT-SF-PCNN, GFF, LLF-IOI, NSST-PAPCNN, and the proposed method, respectively

  • Figure 9

    (Color online) Four MRI-SPECT image pairs. (a1)$\sim$(a4) MRI images; (b1)$\sim$(b4) corresponding SPECT images

  • Figure 10

(Color online) Comparison of fusion results obtained by different methods on 4 MRI-SPECT image pairs. (a)$\sim$(d) show the fusion results for the four image pairs; within each group, the results were obtained by LP-SR, NSCT-SF-PCNN, GFF, LLF-IOI, NSST-PAPCNN, and the proposed method, respectively

  • Figure 11

    (Color online) Histograms of the average evaluation values of fusion results obtained by different methods on 40 experimental image pairs. (a) SF; (b) EI; (c) MI; (d) TMQI

  • Table 1   SF values of LLF decomposition at different decomposition levels $L$

    | Image pair | Metric | $L=2$   | $L=3$   | $L=4$   | $L=5$   |
    |------------|--------|---------|---------|---------|---------|
    | MRI-PET    | SF     | 37.7999 | 38.4297 | 38.3881 | 38.173  |
    | MRI-SPECT  | SF     | 27.1511 | 28.4187 | 28.2667 | 27.9765 |
  • Table 2   Average EI values of different fusion methods

    | Image pair | Metric | Method 1 | Method 2 | Method 3 | Proposed method |
    |------------|--------|----------|----------|----------|-----------------|
    | MRI-PET    | EI     | 87.0194  | 87.0603  | 87.0303  | 87.0731         |
    | MRI-SPECT  | EI     | 68.9888  | 68.5563  | 68.4424  | 69.1077         |
  • Table 3   Average evaluation values of fusion results on 20 MRI-PET image pairs

    | Metric | LP-SR  | NSCT-SF-PCNN | GFF    | LLF-IOI | NSST-PAPCNN | Proposed method |
    |--------|--------|--------------|--------|---------|-------------|-----------------|
    | SF     | 23.789 | 24.992       | 24.709 | 27.731  | 24.219      | 29.525 (1)      |
    | EI     | 68.256 | 73.319       | 72.408 | 82.738  | 71.263      | 86.131 (1)      |
    | MI     | 5.017  | 4.957        | 5.168  | 4.997   | 5.109       | 5.076 (3)       |
    | TMQI   | 0.732  | 0.719        | 0.683  | 0.734   | 0.735       | 0.737 (1)       |

    The parenthesized number gives the rank of the proposed method among the six methods.
  • Table 4   Average evaluation values of fusion results on 20 MRI-SPECT image pairs

    | Metric | LP-SR  | NSCT-SF-PCNN | GFF    | LLF-IOI | NSST-PAPCNN | Proposed method |
    |--------|--------|--------------|--------|---------|-------------|-----------------|
    | SF     | 17.350 | 18.075       | 16.866 | 22.320  | 17.424      | 23.464 (1)      |
    | EI     | 49.368 | 51.370       | 48.281 | 67.992  | 50.481      | 72.207 (1)      |
    | MI     | 4.466  | 4.459        | 4.723  | 4.618   | 4.768       | 4.676 (3)       |
    | TMQI   | 0.730  | 0.715        | 0.719  | 0.730   | 0.726       | 0.729 (2)       |

    The parenthesized number gives the rank of the proposed method among the six methods.
  • Table 5   Running time of different fusion methods

    | Metric   | LP-SR    | NSCT-SF-PCNN | GFF      | LLF-IOI    | NSST-PAPCNN | Proposed method |
    |----------|----------|--------------|----------|------------|-------------|-----------------|
    | Time (s) | 0.046056 | 260.860688   | 0.246661 | 275.129536 | 15.992219   | 331.249379      |
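For reference, below is a minimal NumPy sketch of two of the four objective metrics used in Tables 3 and 4: spatial frequency (SF, Eskicioglu and Fisher [33]) and mutual information (MI, Qu et al. [36]), following their standard definitions. EI and TMQI are omitted, since they require edge operators and the TMQI authors' reference code, respectively; the function names here are illustrative, not from the paper.

```python
import numpy as np

def spatial_frequency(img):
    # SF = sqrt(RF^2 + CF^2), with row/column frequencies taken as the RMS
    # of horizontal/vertical first differences (Eskicioglu & Fisher [33]).
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def mutual_information(a, b, bins=256):
    # MI between two grayscale images, estimated from their joint histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal of a
    py = p.sum(axis=0, keepdims=True)      # marginal of b
    nz = p > 0                             # avoid log(0)
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

def fusion_mi(fused, src_a, src_b):
    # Fusion MI metric (Qu et al. [36]): MI(F, A) + MI(F, B).
    return mutual_information(fused, src_a) + mutual_information(fused, src_b)
```

On 8-bit grayscale inputs, `fusion_mi(fused, src_a, src_b)` follows the definition behind the MI rows of Tables 3 and 4, measuring how much source information the fused image retains.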
