RVprio: A tool for prioritizing runtime verification violations
Lucas Cabral, Breno Miranda, Igor Lima, Marcelo d'Amorim
Software Testing Verification & Reliability, published 2022-03-07. DOI: 10.1002/stvr.1813
Citations: 0
Abstract
Runtime verification (RV) helps to find software bugs by monitoring formally specified properties during testing. A key problem in using RV during testing is how to reduce the manual inspection effort for checking whether property violations are true bugs. To date, there has been no automated approach for determining the likelihood that property violations are true bugs, so inspection remains tedious and time-consuming. We present RVprio, the first automated approach for prioritizing RV violations in order of likelihood of being true bugs. RVprio uses machine learning classifiers to prioritize violations. For training, we used a labelled dataset of 1170 violations from 110 projects. On that dataset, (1) RVprio reached 90% of the effectiveness of a theoretically optimal prioritizer that ranks all true bugs at the top of the ranked list, and (2) 88.1% of true bugs were in the top 25% of RVprio-ranked violations; 32.7% of true bugs were in the top 10%. RVprio was also effective when we applied it to new unlabelled violations, from which we found 54 previously unknown bugs in 8 open-source projects. Our dataset is publicly available online.
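The abstract describes RVprio as a classifier-based prioritizer: violations are scored by their predicted likelihood of being true bugs, and inspection proceeds from the top of the ranked list. The sketch below illustrates that general workflow with a generic scikit-learn classifier on synthetic features; the specific features, classifier, and training procedure are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of classifier-based violation prioritization, in the spirit
# of RVprio. The feature encoding and the choice of random forest are
# illustrative assumptions, not the paper's actual feature set or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical numeric feature vectors for labelled violations
# (e.g., encoded property identifier, violation location, trace attributes),
# with labels: 1 = true bug, 0 = false alarm.
rng = np.random.default_rng(0)
X_train = rng.random((1170, 8))         # 1170 labelled violations, as in the paper's dataset
y_train = rng.integers(0, 2, 1170)      # placeholder labels for the sketch

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# New, unlabelled violations: rank them by predicted probability of being a
# true bug, so manual inspection effort is spent at the top of the list.
X_new = rng.random((50, 8))
scores = clf.predict_proba(X_new)[:, 1]
ranking = np.argsort(-scores)           # violation indices, most likely bug first

for rank, idx in enumerate(ranking[:10], start=1):
    print(f"{rank:2d}. violation #{idx} (P(true bug) = {scores[idx]:.2f})")
```

With real data, the top of this ranking is what an inspector would review first; the paper's results (88.1% of true bugs in the top 25%) quantify how much inspection effort such a ranking can save.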
About the journal:
The journal is the premier outlet for research results on the subjects of testing, verification and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it.
The journal is unique in its emphasis on theoretical foundations and applications to real-world software development. Its balance of theory, empirical work, and practical applications provides readers with better techniques for testing, verifying and improving the reliability of software.
The journal targets researchers, practitioners, educators and students who have a vested interest in results generated by high-quality testing, verification and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
-New criteria for software testing and verification
-Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
-Model-based testing
-Formal verification techniques such as model-checking
-Comparison of testing and verification techniques
-Measurement of and metrics for testing, verification and reliability
-Industrial experience with cutting edge techniques
-Descriptions and evaluations of commercial and open-source software testing tools
-Reliability modeling, measurement and application
-Testing and verification of software security
-Automated test data generation
-Process issues and methods
-Non-functional testing