SCIENTIA SINICA Informationis, Volume 51, Issue 1: 40 (2021) https://doi.org/10.1360/SSI-2020-0178

Method of oil and gas reservoir detection based on geological knowledge distillation learning

  • Received: Jun 16, 2020
  • Accepted: Oct 14, 2020
  • Published: Dec 25, 2020




  • Figure 1

(Color online) A sample of oil and gas reservoir detection

  • Figure 2

Illustration of the GKDMN model

  • Figure 3

    (Color online) Experimental results of oil and gas detection under different values of parameter $\lambda$ on the #9FAB2 block dataset (the horizontal axis is the value of $\lambda$, and the vertical axis indicates the metric scores). (a) Precision; (b) recall; (c) $F1$-score

  • Figure 4

    (Color online) Experimental results of oil and gas detection under different values of parameter $\lambda$ on the #BF8A9 block dataset (the horizontal axis is the value of $\lambda$, and the vertical axis indicates the metric scores). (a) Precision; (b) recall; (c) $F1$-score

  • Figure 5

    (Color online) Experimental results of oil and gas detection over training epochs. (a) $F1$ scores on the #9FAB2 block; (b) $F1$ scores on the #BF8A9 block (the horizontal axis is the number of epochs, and the vertical axis indicates the $F1$ scores)

  • Table 1   Statistics of the datasets

    Statistics                               #9FAB2    #BF8A9
    Number of total wells                    299       180
    Number of total samples                  749209    541717
    Number of reservoir classes              7         6
    Number of wells in train set             239       144
    Number of sensor features in train set   21        12
    Number of wells in test set              60        36
    Number of sensor features in test set    5         5
  • Table 2   Performance comparison of oil and gas detection models on the two datasets$^{\rm~a)}$

    Method       #9FAB2                        #BF8A9
                 Precision  Recall   $F1$      Precision  Recall   $F1$
    GBDT         0.5750     0.6579   0.5872    0.7099     0.7555   0.7289
    LSTM         0.5565     0.6625   0.5779    0.7426     0.7655   0.7484
    FCN          0.5758     0.6700   0.5812    0.7461     0.7509   0.7483
    LSTMFCN      0.6104     0.6841   0.6069    0.7277     0.7951   0.7493
    ALSTMFCN     0.6155     0.5896   0.6006    0.7305     0.7924   0.7546
    GMN-a        0.6126     0.6863   0.6115    0.7380     0.7965   0.7609
    GMN          0.6349     0.6995   0.6394    0.7411     0.8004   0.7640
    GKDMN-a      0.6330     0.6892   0.6475    0.7583     0.7870   0.7686
    GKDMN        0.6486     0.7045   0.6540    0.7488     0.8086   0.7734
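
The precision, recall, and $F1$ columns in Table 2 are standard multi-class classification metrics. As a minimal sketch (the paper's actual evaluation pipeline is not shown here, and the toy labels below are purely illustrative), macro-averaged versions of these scores can be computed as follows:

```python
def macro_prf1(y_true, y_pred):
    """Return macro-averaged (precision, recall, F1) over the label set.

    Each per-class score is computed one-vs-rest, then averaged with
    equal weight per class, which is a common choice for imbalanced
    reservoir-class distributions.
    """
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n


# Toy example with three hypothetical reservoir classes.
y_true = ["oil", "gas", "water", "oil", "gas", "water"]
y_pred = ["oil", "gas", "oil", "oil", "water", "water"]
p, r, f = macro_prf1(y_true, y_pred)
```

Micro-averaging (pooling counts over all classes before dividing) is an alternative that weights frequent classes more heavily; which averaging the paper uses is not stated in this section.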