
SCIENCE CHINA Information Sciences, Volume 59, Issue 7: 071101 (2016). https://doi.org/10.1007/s11432-016-5572-2

High-confidence software evolution

  • Received: Jan 2, 2016
  • Accepted: Mar 29, 2016
  • Published: Jun 13, 2016

Abstract

Software continues to evolve under pressure from changing requirements, platforms, and other environmental factors. Modern software depends on frameworks, and when a framework evolves, its client software must evolve with it; independently, the client may change because its own requirements change. High-confidence software evolution must therefore address both framework evolution and client evolution, each of which may introduce faults and reduce software quality. In this article, we present a set of approaches to key problems in high-confidence software evolution. To support framework evolution, we propose a history-based matching approach that identifies transformation rules between different APIs, and a transformation language that applies these rules automatically. To support client evolution, we propose a path-exploration-based approach that generates tests efficiently by pruning paths irrelevant to the changes between versions, several coverage-based approaches that optimize test execution, and approaches that locate faults and fix memory leaks automatically. Together, these approaches facilitate high-confidence software evolution from multiple aspects.
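To make the framework-evolution idea concrete, the sketch below shows a deliberately simplified, hypothetical version of rule-driven client adaptation: a table of rename rules maps old API names to their replacements, and a rewriter applies them to client source text. This is only an illustration of the general technique; the paper's actual transformation language (SWIN) is type-safe and far richer, and the API names used here are invented for the example.

```python
import re

# Hypothetical rename rules extracted from framework history:
# old API call name -> replacement in the new framework version.
RULES = {
    "Logger.warn": "Logger.warning",
    "conn.open": "conn.connect",
}

def adapt(source: str, rules: dict[str, str]) -> str:
    """Rewrite every occurrence of an old API name with its new name."""
    for old, new in rules.items():
        # The \b anchor keeps us from rewriting longer identifiers
        # that merely start with the old name (e.g. Logger.warning).
        source = re.sub(re.escape(old) + r"\b", new, source)
    return source

client = "log = Logger.warn('low disk'); c = conn.open(url)"
print(adapt(client, RULES))
# prints: log = Logger.warning('low disk'); c = conn.connect(url)
```

A textual rewriter like this cannot guarantee that the adapted program still type-checks, which is precisely the gap a type-safe transformation language is designed to close.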


Funded by

  • National Basic Research Program of China (Grant No. 2015AA01A202)
  • National Natural Science Foundation of China (Grant Nos. 61225007, 61272157, 61421091, 61529201)
  • National Science Foundation (Grant Nos. CCF-1409423, CCF-1434596, CNS-1434582, CNS-1513939, CNS-1564274), supporting Tao Xie's work in part


Acknowledgments

This work was supported by National Basic Research Program of China (Grant No. 2015AA01A202), National Natural Science Foundation of China (Grant Nos. 61421091, 61529201, 61272157, 61225007). Tao Xie's work was supported in part by National Science Foundation (Grant Nos. CCF-1409423, CNS-1434582, CCF-1434596, CNS-1513939, CNS-1564274), and a Google Faculty Research Award.


