SCIENCE CHINA Information Sciences, Volume 61, Issue 1: 012106(2018) https://doi.org/10.1007/s11432-016-9071-5

Mission evaluation: expert evaluation system for large-scale combat tasks of the weapon system of systems

  • Received: Nov 29, 2016
  • Accepted: Mar 16, 2017
  • Published: Jul 19, 2017

Abstract

Mission evaluation is a new requirement for capability evaluation of the weapon system of systems (WSOS) in the era of big data; it is based on evaluating large-scale sets of tasks with similar attributes. When military experts evaluate such large-scale tasks with traditional methods, the time cost is high and the accuracy is low, because a variety of factors confuse the evaluators. We therefore developed a system to help military personnel improve the efficiency of mission evaluation. The main innovations of our work are as follows: qualitative and quantitative visualization of complex information in a three-pane interface; iterative and interactive evaluation of large-scale tasks using active learning; and an overall display of large-scale task evaluation results using statistical graphics. In practical application, the system not only improves users' efficiency and scoring accuracy, but also helps achieve expert recognition of the overall scoring results.
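
The iterative, active-learning-based evaluation mode mentioned above can be pictured as a simple loop: experts score a small batch of tasks, a model propagates those scores to the remaining tasks, and the tasks the model is least certain about are returned to the experts for the next round. The snippet below is a minimal sketch of such a loop, assuming uncertainty sampling with a random-forest regressor (the paper compares several machine learning methods and does not fix one here); the names task_features and ask_expert are illustrative, not the system's actual interface.

    # Minimal sketch of an iterative active-query evaluation loop: experts score a
    # small batch of tasks, a model propagates the scores to the remaining tasks,
    # and the tasks the model is least certain about go back to the experts.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def active_evaluation(task_features, ask_expert, n_rounds=5, batch_size=20, seed=0):
        """task_features: (n_tasks, n_attrs) array of task attribute vectors.
        ask_expert: callback returning expert scores for the requested task indices."""
        rng = np.random.default_rng(seed)
        n_tasks = len(task_features)
        seed_batch = list(rng.choice(n_tasks, size=batch_size, replace=False))
        scores = dict(zip(seed_batch, ask_expert(seed_batch)))  # expert-scored tasks
        model = RandomForestRegressor(n_estimators=100, random_state=seed)

        for _ in range(n_rounds):
            labeled = list(scores)
            model.fit(task_features[labeled], np.array([scores[i] for i in labeled]))
            unlabeled = [i for i in range(n_tasks) if i not in scores]
            if not unlabeled:
                break
            # Uncertainty = disagreement among the ensemble's trees on unscored tasks.
            per_tree = np.stack([t.predict(task_features[unlabeled])
                                 for t in model.estimators_])
            ranking = np.argsort(per_tree.std(axis=0))
            query = [unlabeled[j] for j in ranking[-batch_size:]]
            scores.update(dict(zip(query, ask_expert(query))))  # next expert round

        # Refit with all expert scores, then fill in machine scores for the rest.
        labeled = list(scores)
        model.fit(task_features[labeled], np.array([scores[i] for i in labeled]))
        predictions = model.predict(task_features)
        for i, s in scores.items():
            predictions[i] = s
        return predictions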


Acknowledgment

This work was supported by the Major Program of the National Natural Science Foundation of China (Grant No. U1435218) and the National Natural Science Foundation of China (Grant No. 61403401).


  • Figure 1

From task evaluation to mission evaluation. (a) Traditional patterns of task evaluation; (b) mission evaluation based on large-scale tasks; (c) mission packages including multiple missions; (d) visual interaction and iteration.

  • Figure 2

    Work flow chart of the system.

  • Figure 3

Structure graph of the three-view method.

  • Figure 4

Three-pane interactive interface. (a) Similarity space visualization based on task attribute clustering; (b) visualization of the battlefield environment based on task information; (c) visualization of the whole operation effect based on the OODA interaction network.

  • Figure 5

OODA interaction network.

  • Figure 6

    Presentation interface of task set evaluation results.

  • Figure 7

Early warning task experiment for large-scale air targets. (a) Overall concept map; (b) military personnel scoring interface.

  • Figure 8

Comparative analysis of different visualization methods. (a) Expert group acceptance of each group's evaluation results; (b) time consumption of each group's evaluation.

  • Figure 9

Evaluation experiment on the 100 tasks data set. (a) Score accuracy of evaluation based on three machine learning methods; (b) success or failure accuracy of evaluation based on three machine learning methods.

  • Figure 10

    Overall feature analysis. (a) The spatial scatter plot of the 100 tasks evaluation results; (b) the spatial scatter plot of the 300 tasks evaluation results; (c) the spatial scatter plot of the 700 tasks evaluation results; (d) the histogram and density line of the 100 tasks evaluation results; (e) the histogram and density line of the 300 tasks evaluation results; (f) the histogram and density line of the 700 tasks evaluation results.

  • Figure 11

Analysis of different target heights. (a) The spatial scatter plots of the evaluation results of the three data sets; (b) the box plots and bar charts of the evaluation results of the three data sets.

  • Figure 12

Analysis of different target types. (a) The spatial scatter plots of the evaluation results of the three data sets; (b) the box plots and bar charts of the evaluation results of the three data sets.

  • Figure 13

Analysis of the task types. (a) The spatial scatter plots of the evaluation results of the three data sets; (b) the box plots and bar charts of the evaluation results of the three data sets.

  • Figure 14

Comparative recognition of different machine learning methods for the 300 tasks data set. (a) The score recognition of evaluation based on three machine learning methods; (b) the success or failure recognition of evaluation based on three machine learning methods.

  • Figure 15

Comparative recognition of different machine learning methods for the 700 tasks data set. (a) The score recognition of evaluation based on three machine learning methods; (b) the success or failure recognition of evaluation based on three machine learning methods.

  • Figure 16

Large-scale and long-range striking task experiment. (a) The overall concept map; (b) 100 striking task samples; (c) 400 striking task samples.

  • Figure 17

    Comparative recognition of different machine learning methods for long-range striking tasks. (a) The score recognition of 100 evaluation tasks based on three machine learning methods; (b) success or failure recognition of 400 evaluation tasks based on three machine learning methods.

  • Figure 18

Evaluation results charts of the long-range striking mission based on the three-view active query mode. (a) Evaluation results chart of 100 tasks; (b) evaluation results chart of 400 tasks.

  • Table 1   12 statistical indicators of the evaluation results of the three data sets (a computation sketch is given after the tables)
    N    Mean  Var  Std_dev  Median  Std_err  CV    CSS   USS    R   R1  Kurtosis  Skewness
    97   4.1   6.5  2.5      4       0.26     61.9  622   2263   10  4   -0.79     0.04
    293  4.3   6.6  2.6      4       0.15     60.1  1919  7252   10  4   -0.80     0.17
    681  4.3   6.8  2.6      4       0.10     61.0  4630  17073  10  4   -0.83     0.10
  • Table 2   The comprehensive experimental evaluation results with 20% manual scoring (values in %)
    Method                                  100 tasks           300 tasks           700 tasks
                                            Score   Succ/fail   Score   Succ/fail   Score   Succ/fail
    Random selection (RS)                   60      59          65      60          65      61
    Three-view assisted selection (TVAS)    70      64          74      70          75      71
    Three-view active query (TVAQ)          76      78          86      89          90      92
    (Score = score recognition; Succ/fail = success or failure recognition.)
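
As a complement to Table 1, the snippet below shows one way the listed indicators can be computed from a vector of task scores; it is a sketch, not the paper's implementation. CSS and USS are taken to be the corrected and uncorrected sums of squares, CV the coefficient of variation in percent, R the range, and R1 is assumed here to be the interquartile range (the table does not define it); Kurtosis is read as excess kurtosis, consistent with the negative values reported.

    # Sketch of computing the Table 1 indicators from a vector of task scores.
    import numpy as np
    from scipy import stats

    def table1_indicators(scores):
        x = np.asarray(scores, dtype=float)
        n = len(x)
        mean, var, std = x.mean(), x.var(ddof=1), x.std(ddof=1)
        q1, q3 = np.percentile(x, [25, 75])
        return {
            "N": n,
            "Mean": mean,
            "Var": var,
            "Std_dev": std,
            "Median": np.median(x),
            "Std_err": std / np.sqrt(n),
            "CV": 100 * std / mean,            # coefficient of variation, %
            "CSS": ((x - mean) ** 2).sum(),    # corrected sum of squares = (n-1)*Var
            "USS": (x ** 2).sum(),             # uncorrected sum of squares
            "R": x.max() - x.min(),            # range
            "R1": q3 - q1,                     # assumed: interquartile range
            "Kurtosis": stats.kurtosis(x),     # excess kurtosis
            "Skewness": stats.skew(x),
        }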
