
SCIENCE CHINA Information Sciences, Volume 63, Issue 4: 140306 (2020). https://doi.org/10.1007/s11432-019-2798-9

Cascade conditional generative adversarial nets for spatial-spectral hyperspectral sample generation

More info
  • Received: Oct 31, 2019
  • Accepted: Feb 13, 2020
  • Published: Mar 9, 2020

Abstract

Sample generation is an effective way to address the insufficiency of training data in hyperspectral image classification. The generative adversarial network (GAN) is a popular deep learning method that uses adversarial training to generate samples conditioned on a required class label. In this paper, we propose cascade conditional generative adversarial nets, named C$^{2}$GAN, for complete spatial-spectral sample generation for hyperspectral images. C$^{2}$GAN consists of two stages. Stage one generates the spatial information within a window of fixed size by feeding in random noise and the required class label. Stage two generates the spectral information of all bands within that spatial region by feeding in the labeled regions. Visualization and verification of the generated samples on the Pavia University and Salinas datasets show superior performance, demonstrating that our method is useful for hyperspectral image classification.
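For context, the adversarial training described above follows the conditional-GAN setting; its standard minimax objective (the stage-specific losses $L(G1)$, $L(D1)$, and $L(G2,~D2)$ referenced in Algorithm 1 are variants of this form, defined in the paper's Eqs. (4)--(6)) is

$$\min_{G}\max_{D} V(D,G)=\mathbb{E}_{x\sim p_{\rm data}(x)}\left[\log D(x\mid y)\right]+\mathbb{E}_{z\sim p_{z}(z)}\left[\log\bigl(1-D(G(z\mid y)\mid y)\bigr)\right],$$

where $y$ is the class-label condition, $z$ is the noise input, $G$ is the generator, and $D$ is the discriminator.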


Acknowledgment

This work was supported by National Natural Science Foundation of China (Grant Nos. 61973285, 61873249, 61773355, 61603355), Natural Science Foundation of Hubei Province (Grant No. 2018CFB528), Opening Fund of the Ministry of Education Key Laboratory of Geological Survey and Evaluation (Grant No. CUG2019ZR10), and Fundamental Research Funds for the Central Universities (Grant No. CUGL17022).



  • Figure 1

    (Color online) Framework of the proposed C$^{2}$GAN method.

  • Figure 2

    (Color online) Framework of generator $G1$ of stage one.

  • Figure 3

    (Color online) Framework of discriminator $D1$ of stage one.

  • Figure 4

    (Color online) Framework of generator $G2$ of stage two.

  • Figure 5

    (Color online) Framework of discriminator $D2$ of stage two.

  • Figure 6

    (Color online) Display of datasets. (a) Ground truth of Pavia University; (b) false-color composite map of Pavia University (bands 53, 31, and 8); (c) ground truth of Salinas; (d) false-color composite map of Salinas (bands 50, 170, and 190).

  • Figure 7

(Color online) Visualization of generated spatial samples of size 32 $\times$ 32 pixels for all 9 class labels on the Pavia University dataset.

  • Figure 8

(Color online) Visualization of generated spatial samples of size 32 $\times$ 32 pixels for all 16 class labels on the Salinas dataset.

  • Figure 9

(Color online) Comparison of the ground truth, the generated (fake) spatial-spectral image, and the real spatial-spectral image for the first nine classes on the two datasets. (a) Pavia University dataset, bands 53, 31, and 8; (b) Salinas dataset, bands 50, 170, and 190.

  • Figure 10

    (Color online) Contrast curves between generated bands and real bands in each class on the Pavia University dataset. (a) 100 real spectra with all bands; (b) 100 generated spectra with all bands.

  • Figure 11

    (Color online) Contrast curves between generated bands and real bands in each class on the Salinas dataset. (a) 100 real spectra with all bands; (b) 100 generated spectra with all bands.

  • Algorithm 1 Cascade conditional generative adversarial net

    Require: the spatial training samples $y1$, scattered labels $x$, and spatial-spectral training samples $y2$.
    Output: generated spatial-spectral sample $G2$($G1$(x)).

    Initialize weights and biases of $G1$, $G2$, $D1$, $D2$;
    for every epoch do
        for each minibatch ($b\_y1$, $b\_x$) drawn from ($y1$, x) do
            Create Gaussian noise $Z1$;
            Generate spatial samples $G1(b\_x$, $Z1$);
            for k steps do
                Update parameters of $D1$ based on loss function $L(D1)$ in (5);
            end for
            Update parameters of $G1$ based on loss function $L(G1)$ in (4);
        end for
    end for
    for every epoch do
        for each minibatch ($b\_y1$, $b\_y2$) drawn from ($y1$, $y2$) do
            Generate spatial-spectral samples $G2(b\_y1$);
            for k steps do
                Update parameters of $D2$ by maximizing $L(G2,~D2)$ in (6);
            end for
            Update parameters of $G2$ by minimizing $L(G2,~D2)$ in (6);
        end for
    end for
    Generate spatial information $G1$(x) with the trained $G1$;
    Generate spatial-spectral information $G2$($G1$(x)) with the trained $G2$;
    Return $G2$($G1$(x)).
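The cascade's data flow at inference time ($G1$ maps a label map plus noise to a spatial sample; $G2$ maps that spatial sample to a full spatial-spectral cube) can be sketched in NumPy. This is a toy illustration only: single linear layers with random weights stand in for the deep generators, and the dimensions are reduced from the paper's 32 $\times$ 32 windows to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy dimensions (the paper uses 32 x 32 windows; 9 classes
# and 103 bands match Pavia University).
H = W = 8
NUM_CLASSES = 9
NUM_BANDS = 103
NOISE_DIM = 100

def g1(label_map, z, weights):
    """Stage-one generator stand-in: label map + noise -> spatial sample."""
    onehot = np.eye(NUM_CLASSES)[label_map.ravel()]   # (H*W, C) one-hot labels
    feat = np.concatenate([onehot.ravel(), z])        # flattened input vector
    return np.tanh(feat @ weights).reshape(H, W)      # spatial sample in [-1, 1]

def g2(spatial, weights):
    """Stage-two generator stand-in: spatial sample -> spatial-spectral cube."""
    out = np.tanh(spatial.ravel() @ weights)
    return out.reshape(H, W, NUM_BANDS)

# Randomly initialized stand-in weights (a real G1/G2 would be trained
# convolutional networks, per Algorithm 1).
w1 = rng.normal(0, 0.01, (H * W * NUM_CLASSES + NOISE_DIM, H * W))
w2 = rng.normal(0, 0.01, (H * W, H * W * NUM_BANDS))

x = rng.integers(0, NUM_CLASSES, size=(H, W))   # scattered class labels
z1 = rng.normal(size=NOISE_DIM)                 # Gaussian noise Z1

fake_spatial = g1(x, z1, w1)                    # stage one: G1(x, Z1)
fake_cube = g2(fake_spatial, w2)                # stage two: G2(G1(x))
print(fake_spatial.shape, fake_cube.shape)      # (8, 8) (8, 8, 103)
```

The point of the sketch is the composition $G2(G1(x))$: stage two never sees the noise or the labels directly, only the spatial sample produced by stage one.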

  • Table 1   The mean spectral angle (SA) between the 100 generated spectra and the 100 real spectra of each class on the Pavia University dataset
    Class    SA (radian)         Class    SA (radian)
    class1   0.093$\pm$0.016     class6   0.176$\pm$0.030
    class2   0.173$\pm$0.026     class7   0.075$\pm$0.015
    class3   0.072$\pm$0.015     class8   0.067$\pm$0.011
    class4   0.103$\pm$0.012     class9   0.362$\pm$0.047
    class5   0.075$\pm$0.011
  • Table 2   The mean spectral angle (SA) between the 100 generated spectra and the 100 real spectra of each class on the Salinas dataset
    Class    SA (radian)         Class    SA (radian)
    class1   0.035$\pm$0.009     class9   0.163$\pm$0.182
    class2   0.106$\pm$0.073     class10  0.187$\pm$0.044
    class3   0.114$\pm$0.163     class11  0.093$\pm$0.023
    class4   0.104$\pm$0.039     class12  0.115$\pm$0.056
    class5   0.138$\pm$0.116     class13  0.073$\pm$0.020
    class6   0.072$\pm$0.018     class14  0.121$\pm$0.017
    class7   0.110$\pm$0.142     class15  0.130$\pm$0.007
    class8   0.081$\pm$0.080     class16  0.106$\pm$0.045
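The spectral angle (SA) reported in Tables 1 and 2 is the arccosine of the normalized inner product between a generated spectrum and a real spectrum; the tables report the mean $\pm$ standard deviation over 100 spectrum pairs per class. A minimal NumPy implementation of the per-pair angle:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra a and b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against |cos| exceeding 1 by floating-point rounding.
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

r = np.array([0.2, 0.4, 0.6, 0.8])
print(spectral_angle(r, r))      # ~0: identical spectra
print(spectral_angle(r, 3 * r))  # ~0: SA ignores overall scaling
print(spectral_angle(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```

Because the angle is invariant to scaling of either spectrum, SA is insensitive to overall illumination differences, which makes small values in the tables evidence that the generated spectra match the real spectral shapes.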
  • Table 3   The comparison of real datasets real-5, real-10, real-30, real-1%, real-5% with generated datasets gen-5, gen-10, gen-30, gen-1% and gen-5% in CNN and ResNet models on the Salinas dataset
    Data      CNN                                      ResNet
              Overall accuracy (%)  Kappa$\times$100   Overall accuracy (%)  Kappa$\times$100
    real-5    85.83$\pm$1.18        84.89$\pm$1.26     88.75$\pm$1.02        88.06$\pm$1.09
    gen-5     87.92$\pm$1.56        87.11$\pm$1.66     91.50$\pm$2.68        92.08$\pm$2.92
    real-10   93.54$\pm$0.59        93.11$\pm$0.63     93.12$\pm$1.02        92.67$\pm$1.09
    gen-10    94.17$\pm$0.29        93.78$\pm$0.32     94.58$\pm$0.59        94.22$\pm$0.63
    real-30   96.67$\pm$0.42        96.44$\pm$0.35     96.74$\pm$0.10        96.52$\pm$0.11
    gen-30    97.50$\pm$0.17        97.33$\pm$0.18     97.15$\pm$0.10        96.96$\pm$0.10
    real-1%   97.79$\pm$0.14        97.53$\pm$0.15     95.74$\pm$0.53        95.27$\pm$0.59
    gen-1%    98.60$\pm$0.26        98.45$\pm$0.29     96.09$\pm$0.76        95.56$\pm$0.85
    real-5%   99.75$\pm$0.02        99.73$\pm$0.02     99.72$\pm$0.01        99.68$\pm$0.02
    gen-5%    99.79$\pm$0.04        99.77$\pm$0.05     99.85$\pm$0.01        99.84$\pm$0.02
  • Table 4   The comparison of real datasets real-5, real-10, real-30, real-1%, real-5% with generated datasets gen-5, gen-10, gen-30, gen-1% and gen-5% in CNN and ResNet models on the Pavia University dataset
    Data      CNN                                      ResNet
              Overall accuracy (%)  Kappa$\times$100   Overall accuracy (%)  Kappa$\times$100
    real-5    78.33$\pm$3.29        75.62$\pm$3.70     73.89$\pm$1.84        70.62$\pm$2.07
    gen-5     82.22$\pm$1.57        80.00$\pm$1.77     75.56$\pm$1.81        72.50$\pm$2.04
    real-10   82.96$\pm$2.10        79.58$\pm$2.36     79.63$\pm$1.89        77.08$\pm$2.12
    gen-10    83.33$\pm$3.27        80.12$\pm$4.08     82.96$\pm$0.52        80.83$\pm$0.59
    real-30   89.88$\pm$1.49        88.61$\pm$1.68     90.12$\pm$0.76        85.56$\pm$0.52
    gen-30    92.89$\pm$1.07        89.52$\pm$1.72     92.47$\pm$1.26        86.39$\pm$0.98
    real-1%   95.17$\pm$0.42        93.16$\pm$0.56     95.81$\pm$0.28        94.47$\pm$0.37
    gen-1%    95.24$\pm$0.45        93.70$\pm$0.63     96.09$\pm$0.23        94.84$\pm$0.30
    real-5%   99.73$\pm$0.04        99.64$\pm$0.05     99.53$\pm$0.04        99.37$\pm$0.05
    gen-5%    99.74$\pm$0.02        99.65$\pm$0.03     99.71$\pm$0.02        99.62$\pm$0.02
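The overall accuracy (OA) and Kappa$\times$100 columns in Tables 3 and 4 are standard metrics computed from a classification confusion matrix: OA is the fraction of correctly classified pixels, and Cohen's kappa corrects that fraction for chance agreement between the true and predicted class distributions. A minimal sketch (the confusion matrix here is a made-up toy example, not data from the paper):

```python
import numpy as np

def oa_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = true classes, columns = predicted classes)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                       # observed agreement (= OA)
    pe = (c.sum(0) * c.sum(1)).sum() / n ** 2  # chance agreement from marginals
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Toy 2-class confusion matrix: 45+40 correct out of 100 pixels.
conf = np.array([[45, 5],
                 [10, 40]])
oa, kappa = oa_and_kappa(conf)
print(f"OA = {100 * oa:.2f}%, Kappa x 100 = {100 * kappa:.2f}")
# OA = 85.00%, Kappa x 100 = 70.00
```

Kappa is the more conservative of the two: with imbalanced classes a classifier can reach a high OA by favoring the majority class, while kappa discounts exactly that kind of chance-level agreement.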
  • Table 5   Classification results of 3D-CNN, MSGAN, 3D-GAN, C$^{2}$GAN+CNN and C$^{2}$GAN+ResNet on the Salinas dataset
    Method Overall accuracy (%) Average accuracy (%) Kappa $\times$100
    3D-CNN 95.77$\pm$2.28 94.85$\pm$4.69 95.28$\pm$2.25
    MSGAN 96.23$\pm$1.32 96.35$\pm$2.52 95.80$\pm$1.46
    3D-GAN 99.61$\pm$0.22 99.48$\pm$0.32 99.50$\pm$0.37
    C$^{2}$GAN+CNN 99.79$\pm$0.04 98.72$\pm$0.71 99.77$\pm$0.05
    C$^{2}$GAN+ResNet 99.85$\pm$0.01 99.57$\pm$0.03 99.84$\pm$0.02
  • Table 6   Classification results of 3D-CNN, MSGAN, 3D-GAN, C$^{2}$GAN+CNN and C$^{2}$GAN+ResNet on the Pavia University dataset
    Method Overall accuracy (%) Average accuracy (%) Kappa $\times$100
    3D-CNN 97.63$\pm$0.21 96.71$\pm$0.57 96.85$\pm$0.28
    MSGAN 98.10$\pm$0.51 97.68$\pm$0.65 97.48$\pm$0.69
    3D-GAN 98.37$\pm$0.26 97.84$\pm$0.95 97.86$\pm$0.35
    C$^{2}$GAN+CNN 99.74$\pm$0.02 98.96$\pm$0.18 99.65$\pm$0.03
    C$^{2}$GAN+ResNet 99.71$\pm$0.02 98.54$\pm$0.38 99.62$\pm$0.02
