SCIENCE CHINA Information Sciences, Volume 61, Issue 9: 092105 (2018) https://doi.org/10.1007/s11432-017-9308-x

Dating ancient paintings of Mogao Grottoes using deeply learnt visual codes

  • Received: Sep 6, 2017
  • Accepted: Nov 30, 2017
  • Published: Aug 14, 2018


Cultural heritage is an asset of all the peoples of the world, and its preservation and inheritance are conducive to the progress of human civilization. In northwestern China, there is a world heritage site, the Mogao Grottoes, which contains a wealth of mural paintings reflecting the historical cultures of ancient China. To study these historical cultures, one critical procedure is to date the mural paintings, i.e., to determine the era in which they were created. Until now, most mural paintings at the Mogao Grottoes have been dated by directly referring to the mural texts or to historical documents. However, the creation era of some paintings remains undetermined due to a lack of reference materials. Considering that the drawing style of mural paintings changed over the course of history, and that drawing style can be learned and quantified from painting data, we formulate the problem of mural-painting dating as a problem of drawing-style classification. In fact, drawing styles can be expressed not only in color or curvature, but also in forms that have not yet been observed. To this end, besides sophisticated color and shape descriptors, a deep convolutional neural network is designed to encode the implicit drawing styles. A total of 3860 mural paintings with confirmed creation-era labels, collected from 194 different grottoes, are used to train the classification model and build the dating method. In experiments, the proposed dating method is applied to seven mural paintings whose previous datings were controversial, and the new dating results are endorsed by Dunhuang experts.
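The dating procedure described above amounts to patch-level style classification followed by image-level voting (as reported in the Figure 5 results). The following is a minimal, hypothetical sketch of that voting step; the `classify_patch` stub and the patch dictionaries are illustrative stand-ins, not the paper's actual model or code:

```python
from collections import Counter

# The six era classes of the paper's DunHuang-E6 dataset
ERAS = ["Sui", "Early Tang", "Peak Tang", "Middle Tang", "Late Tang", "Wu Dai"]

def classify_patch(patch):
    # Stand-in for the trained classifier (in the paper, a model over
    # IFV, RCC, and DunNet features); here each patch simply carries
    # a precomputed predicted label.
    return patch["predicted_era"]

def date_painting(patches):
    """Majority vote over per-patch era predictions.

    Returns the winning era and the fraction of patches that voted for it.
    """
    votes = Counter(classify_patch(p) for p in patches)
    era, count = votes.most_common(1)[0]
    return era, count / len(patches)

# Toy example: 7 patches vote Peak Tang, 3 vote Middle Tang
patches = ([{"predicted_era": "Peak Tang"}] * 7
           + [{"predicted_era": "Middle Tang"}] * 3)
era, confidence = date_painting(patches)
# era == "Peak Tang", confidence == 0.7
```

The vote fraction gives a simple confidence score for the assigned era, which is one plausible way to flag paintings whose dating remains ambiguous.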


This work was supported by the National Basic Research Program of China (Grant No. 2012CB725303), the Major Program of the Key Research Institute on Humanities and Social Science of the Chinese Ministry of Education (Grant No. 16JJD870002), and the National Natural Science Foundation of China (Grant Nos. 91546106, 61301277). The authors would like to thank the Dunhuang Research Academy for providing the mural paintings of Dunhuang-P7, and Mr. Hui-Min WANG for helpful suggestions and discussions.



  • Figure 1

    (Color online) (a) A part of grotto #206; (b) a mural painting; (c) and (d) details of the DunHuang-E6 dataset: (c) sample painting images from DunHuang-E6; (d) the distribution of the paintings in DunHuang-E6 by the dynasty in which they were drawn, the number of paintings from each dynasty, and the indices of the grottoes where the paintings are located. Note that the marks on the top bar in (d) indicate the start year and end year of each dynasty.

  • Figure 2

    (Color online) The system overview.

  • Figure 3

    (Color online) Architecture of the proposed DunNet neural network.

  • Figure 4

    (Color online) Classification accuracies obtained by binary classifiers (%). (a) RCC; (b) IFV; (c) IFV+RCC+DunNet.

  • Figure 5

    (Color online) Seven mural paintings from the Mogao Grottoes. Note that (a)–(f) are from grotto #205, and (g) is from grotto #206. (a) A mural painting on the south wall of grotto #205; (b) a mural painting on the south side of the west wall; (c) a mural painting on the north side of the west wall; (d) a mural painting of the Peacock King, on the aisle's ceiling of grotto #205; (e) a painting of Guanyin Buddha on the south side of the west wall of grotto #205; (f) a painting from the central ceiling of grotto #205; (g) a Flying-Apsaras painting from grotto #206. The classification results are shown in the two bar charts. (I) Voting results on the seven paintings using the 6-class classifier. (II) Voting results on the seven painting images using binary classifiers. (III) The possible creation era of the seven paintings in (a)–(g). Green denotes the up-to-date official creation era given by the Dunhuang Research Academy, and red with a question mark denotes the most probable alternative creation era.

  • Table 1   Classification accuracies of different methods and their combinations (%)

    Descriptor             | Sui–Early Tang | Early Tang–Peak Tang | Peak Tang–Middle Tang | Middle Tang–Late Tang | Late Tang–Wu Dai | Six Dynasties
    IFV_sift               | 82.11 | 73.61 | 82.45 | 67.08 | 80.20 | 59.90
    CN11                   | 76.94 | 68.30 | 75.64 | 69.88 | 75.66 | 46.02
    DD25                   | 70.94 | 67.14 | 70.29 | 71.87 | 66.48 | 39.94
    DD50                   | 70.16 | 69.10 | 75.23 | 72.05 | 65.86 | 43.78
    RCC                    | 84.77 | 75.26 | 83.57 | 70.73 | 78.83 | 59.34
    AlexNet(256)           | 85.91 | 76.77 | 81.30 | 77.50 | 84.84 | 60.37
    GoogLeNet(256)         | 86.51 | 77.44 | 83.23 | 80.12 | 81.88 | 60.83
    DunNet(256)            | 88.11 | 81.03 | 84.40 | 75.65 | 83.03 | 62.95
    IFV_sift + RCC          | 83.86 | 82.46 | 91.98 | 72.51 | 87.57 | 69.60
    IFV_sift + DunNet       | 88.98 | 83.78 | 85.91 | 77.88 | 83.92 | 64.52
    IFV_sift + RCC + DunNet | 88.51 | 86.07 | 91.62 | 77.25 | 89.40 | 71.64
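The combined rows of Table 1 suggest that the descriptors are fused before classification. A common way to realize such a combination is late fusion by concatenating independently normalized feature vectors; the sketch below illustrates this under the assumption of plain L2-normalized concatenation (the descriptor names are the paper's, but the dimensionalities and the fusion scheme here are illustrative, not the paper's exact procedure):

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    # Scale each descriptor to unit length so no single one dominates
    return v / (np.linalg.norm(v) + eps)

def fuse_features(*feature_vectors):
    """Concatenate independently L2-normalized descriptors (e.g. IFV,
    RCC, DunNet activations) into a single fused vector, which a
    downstream classifier would then consume."""
    return np.concatenate([l2_normalize(v) for v in feature_vectors])

# Illustrative stand-ins; only DunNet's 256-d output is named in Table 1
ifv = np.random.rand(128)   # hypothetical IFV descriptor
rcc = np.random.rand(64)    # hypothetical RCC descriptor
dun = np.random.rand(256)   # stand-in for DunNet(256) activations
fused = fuse_features(ifv, rcc, dun)
# fused.shape == (448,)
```

Per-descriptor normalization before concatenation is the usual safeguard when the component descriptors live on very different scales, as raw Fisher vectors and CNN activations typically do.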
