{"title":"模仿用户行为以改进内部测试套件","authors":"Qianqian Wang, A. Orso","doi":"10.1109/ICSE-Companion.2019.00133","DOIUrl":null,"url":null,"abstract":"Testing is today the most widely used software quality assurance approach. However, it is well known that the necessarily limited number of tests developed and run in-house are not representative of the rich variety of user executions in the field. In order to bridge this gap between in-house tests and field executions, we need a way to (1) identify the behaviors exercised in the field that were not exercised in-house and (2) generate new tests that exercise such behaviors. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field data collection is triggered when a user exercises an untested behavior B, currently expressed as the violation of an invariant. When it receives the collected field data, Replica uses guided symbolic execution to generate one or more executions that exercise the previously untested behavior B. Our initial empirical evaluation, performed on a set of real user executions, shows that Replica can successfully generate tests that mirror field behaviors and have similar fault-detection capability. Our results also show that Replica can outperform a traditional input generation approach that does not use field-data guidance.","PeriodicalId":273100,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Mimicking User Behavior to Improve In-House Test Suites\",\"authors\":\"Qianqian Wang, A. Orso\",\"doi\":\"10.1109/ICSE-Companion.2019.00133\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Testing is today the most widely used software quality assurance approach. However, it is well known that the necessarily limited number of tests developed and run in-house are not representative of the rich variety of user executions in the field. In order to bridge this gap between in-house tests and field executions, we need a way to (1) identify the behaviors exercised in the field that were not exercised in-house and (2) generate new tests that exercise such behaviors. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field data collection is triggered when a user exercises an untested behavior B, currently expressed as the violation of an invariant. When it receives the collected field data, Replica uses guided symbolic execution to generate one or more executions that exercise the previously untested behavior B. Our initial empirical evaluation, performed on a set of real user executions, shows that Replica can successfully generate tests that mirror field behaviors and have similar fault-detection capability. 
Our results also show that Replica can outperform a traditional input generation approach that does not use field-data guidance.\",\"PeriodicalId\":273100,\"journal\":{\"name\":\"2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSE-Companion.2019.00133\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSE-Companion.2019.00133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mimicking User Behavior to Improve In-House Test Suites
Testing is today the most widely used software quality assurance approach. However, it is well known that the necessarily limited set of tests developed and run in-house is not representative of the rich variety of user executions in the field. To bridge this gap between in-house tests and field executions, we need a way to (1) identify behaviors exercised in the field that were not exercised in-house and (2) generate new tests that exercise such behaviors. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field data collection is triggered when a user exercises an untested behavior B, currently expressed as the violation of an invariant. When it receives the collected field data, Replica uses guided symbolic execution to generate one or more executions that exercise the previously untested behavior B. Our initial empirical evaluation, performed on a set of real user executions, shows that Replica can successfully generate tests that mirror field behaviors and have fault-detection capability similar to that of the corresponding field executions. Our results also show that Replica can outperform a traditional input generation approach that does not use field-data guidance.
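To make the first step concrete, here is a minimal sketch of what invariant-violation-triggered field-data collection might look like in Python. This is an illustration of the general idea, not Replica's actual implementation (whose instrumentation is not detailed in the abstract); the names monitor_invariant, apply_discount, and field_data.jsonl are all hypothetical.

import functools
import json

def monitor_invariant(invariant, log_path="field_data.jsonl"):
    """Wrap a function; log its arguments whenever `invariant` does not hold."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not invariant(*args, **kwargs):
                # Untested behavior observed in the field: capture the inputs
                # so test generation can later reproduce this execution.
                with open(log_path, "a") as log:
                    log.write(json.dumps({
                        "function": func.__name__,
                        "args": args,
                        "kwargs": kwargs,
                    }) + "\n")
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Example: suppose in-house tests only ever exercised non-negative amounts,
# so the invariant `amount >= 0` held on all in-house runs. A field call with
# a negative amount violates it and triggers data collection.
@monitor_invariant(lambda amount: amount >= 0)
def apply_discount(amount):
    return amount * 0.9

The abstract does not say how the invariants are obtained; one plausible setup is to infer them from in-house test runs with a Daikon-style tool, so that a violation in the field signals behavior the existing tests never exercised.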
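The second step, guided symbolic execution, can be illustrated with a small constraint-solving sketch: given a path condition recovered from the violating field execution, a solver finds a concrete input that replays the untested behavior as an in-house test. The snippet below uses the Z3 solver (z3-solver on PyPI) as a stand-in for a full symbolic executor and continues the hypothetical apply_discount example; it shows the general technique, not Replica's implementation.

from z3 import Int, Solver, sat

def generate_mimicking_input():
    """Solve the path condition of the field-observed branch for a concrete input."""
    amount = Int("amount")
    solver = Solver()
    # Path condition recovered from the violating field execution: the branch
    # taken corresponds to the invariant `amount >= 0` failing.
    solver.add(amount < 0)
    if solver.check() == sat:
        return solver.model()[amount].as_long()
    return None

def test_mimics_field_behavior():
    amount = generate_mimicking_input()
    assert amount is not None and amount < 0
    # New in-house test exercising the previously untested behavior B.
    apply_discount(amount)

The point of the guidance is that the solver is not exploring paths blindly: the collected field data pins down which branch (here, the invariant violation) the generated test must reach, which is why a field-data-guided approach can outperform unguided input generation.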