Generating Highly-structured Input Data by Combining Search-based Testing and Grammar-based Fuzzing
Mitchell Olsthoorn, A. Deursen, Annibale Panichella
DOI: 10.1145/3324884.3418930
Software testing is an important and time-consuming task that is often done manually. In recent decades, researchers have developed techniques to generate input data (e.g., fuzzing) and to automate the generation of test cases (e.g., search-based testing). However, these techniques have complementary limitations: search-based testing does not generate highly-structured data, and grammar-based fuzzing does not generate test case structures. To address these limitations, we combine the two techniques. Applying grammar-based mutations to the input data gathered by the search-based testing algorithm allows us to co-evolve both aspects of test case generation. We evaluate our approach, called G-EvoSuite, by performing an empirical study on 20 Java classes from the three most popular JSON parsers across multiple search budgets. Our results show that the proposed approach improves branch coverage for JSON-related classes by 15% on average (with a maximum increase of 50%) without negatively impacting other classes.
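To make the core idea concrete, here is a minimal Python sketch of the kind of grammar-based mutation the abstract describes: pick one node of a parsed JSON input and replace it with a freshly derived value from a small grammar. The grammar, probabilities, and names are illustrative assumptions, not G-EvoSuite's actual implementation.

```python
import json
import random

# Toy JSON value grammar: each rule produces a random value of one kind.
GRAMMAR = {
    "string": lambda: random.choice(["a", "key", "hello"]),
    "number": lambda: random.randint(-100, 100),
    "bool": lambda: random.choice([True, False]),
    "object": lambda: {random.choice(["x", "y"]): random.randint(0, 9)},
    "array": lambda: [random.randint(0, 9) for _ in range(random.randint(0, 3))],
}

def mutate(value):
    """Replace one randomly chosen node of a parsed JSON value with a
    freshly derived value from the grammar, keeping the rest intact."""
    if isinstance(value, dict) and value and random.random() < 0.7:
        key = random.choice(list(value))
        return {**value, key: mutate(value[key])}
    if isinstance(value, list) and value and random.random() < 0.7:
        i = random.randrange(len(value))
        return value[:i] + [mutate(value[i])] + value[i + 1:]
    production = random.choice(list(GRAMMAR.values()))
    return production()

seed = '{"user": {"name": "alice", "age": 31}, "tags": [1, 2, 3]}'
mutant = mutate(json.loads(seed))
print(json.dumps(mutant))  # e.g. {"user": {"name": "alice", "age": true}, ...}
```

Because the mutant is always well-formed JSON derived from the grammar, such mutations can reach parser branches that unstructured string mutations rarely hit.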
{"title":"Generating Highly-structured Input Data by Combining Search-based Testing and Grammar-based Fuzzing","authors":"Mitchell Olsthoorn, A. Deursen, Annibale Panichella","doi":"10.1145/3324884.3418930","DOIUrl":"https://doi.org/10.1145/3324884.3418930","url":null,"abstract":"Software testing is an important and time-consuming task that is often done manually. In the last decades, researchers have come up with techniques to generate input data (e.g., fuzzing) and automate the process of generating test cases (e.g., search-based testing). However, these techniques are known to have their own limitations: search-based testing does not generate highly-structured data; grammar-based fuzzing does not generate test case structures. To address these limitations, we combine these two techniques. By applying grammar-based mutations to the input data gathered by the search-based testing algorithm, it allows us to co-evolve both aspects of test case generation. We evaluate our approach, called G-EVOSUITE, by performing an empirical study on 20 Java classes from the three most popular JSON parsers across multiple search budgets. Our results show that the proposed approach on average improves branch coverage for JSON related classes by 15 % (with a maximum increase of 50 %) without negatively impacting other classes.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115255054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Styx: A Data-Oriented Mutation Framework to Improve the Robustness of DNN
Meixi Liu, Weijiang Hong, Weiyu Pan, Chendong Feng, Zhenbang Chen, Ji Wang
DOI: 10.1145/3324884.3418903
The robustness of deep neural networks (DNNs) is critical and challenging to ensure. In this paper, we propose a general data-oriented mutation framework, called Styx, to improve the robustness of DNNs. Styx generates new training data by slightly mutating the existing training data. In this way, Styx preserves the DNN's accuracy on the test dataset while improving its resilience to small perturbations, i.e., its robustness. We have instantiated Styx for image classification and proposed pixel-level mutation rules that are applicable to any image classification DNN. We have applied Styx to several commonly used benchmarks and compared it with representative adversarial training methods. The preliminary experimental results indicate the effectiveness of Styx.
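A minimal numpy sketch of what a pixel-level mutation rule could look like in this setting: nudge a few randomly chosen pixels by a small bound and keep the original label, then augment the training set with the mutants. The number of mutated pixels, the perturbation bound, and all names are illustrative assumptions, not Styx's actual rules.

```python
import numpy as np

def pixel_mutate(image, n_pixels=10, max_delta=8, rng=None):
    """Return a slightly mutated copy of a uint8 image: nudge a few
    randomly chosen pixels by at most max_delta intensity levels.
    The mutation is assumed small enough to preserve the image's label."""
    rng = rng or np.random.default_rng()
    mutant = image.astype(np.int16).copy()
    h, w = image.shape[:2]
    for _ in range(n_pixels):
        y, x = rng.integers(h), rng.integers(w)
        mutant[y, x] += rng.integers(-max_delta, max_delta + 1)
    return np.clip(mutant, 0, 255).astype(np.uint8)

# Augment a (here random, stand-in) training set with label-preserving mutants.
train_x = np.random.randint(0, 256, size=(100, 28, 28), dtype=np.uint8)
train_y = np.random.randint(0, 10, size=100)
aug_x = np.concatenate([train_x, np.stack([pixel_mutate(img) for img in train_x])])
aug_y = np.concatenate([train_y, train_y])
```

Retraining on the augmented set is what pushes the model to classify small perturbations of a training point the same way as the point itself.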
{"title":"Styx: A Data-Oriented Mutation Framework to Improve the Robustness of DNN","authors":"Meixi Liu, Weijiang Hong, Weiyu Pan, Chendong Feng, Zhenbang Chen, Ji Wang","doi":"10.1145/3324884.3418903","DOIUrl":"https://doi.org/10.1145/3324884.3418903","url":null,"abstract":"The robustness of deep neural network (DNN) is critical and challenging to ensure. In this paper, we propose a general data-oriented mutation framework, called Styx, to improve the robustness of DNN. Styx generates new training data by slightly mutating the training data. In this way, Styx ensures the DNN's accuracy on the test dataset while improving the adaptability to small perturbations, i.e., improving the robustness. We have instantiated Styx for image classification and proposed pixel-level mutation rules that are applicable to any image classification DNNs. We have applied Styx on several commonly used benchmarks and compared Styx with the representative adversarial training methods. The preliminary experimental results indicate the effectiveness of Styx.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129659074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scalable Multiple-View Analysis of Reactive Systems via Bidirectional Model Transformations
Christos Tsigkanos, Nianyu Li, Zhi Jin, Zhenjiang Hu, C. Ghezzi
DOI: 10.1145/3324884.3416579
Systematic model-driven design and early validation enable engineers to verify that a reactive system does not violate its requirements before actually implementing it. Requirements may come from multiple stakeholders, who are often concerned with different facets: design typically involves different experts with different concerns and views of the system. Engineers start from a specification, which may be sourced from some domain model, while validation is often performed on state-transition structures that support model checking. Two computationally expensive steps may work against scalability: the transformation from specification to state-transition structures, and model checking itself. We propose a technique that makes the former efficient and also makes the resulting transition systems small enough to be verified efficiently. The technique automatically projects the specification into submodels, depending on the property to be evaluated, each capturing some stakeholder's viewpoint. The resulting reactive-system submodel is then transformed into a state-transition structure and verified. The technique achieves cone-of-influence reduction by slicing at the level of the specification model. Submodels are analysis-equivalent to the corresponding full model. If stakeholders propose a change to a submodel based on their own view, the changes are automatically propagated to the specification model and any other affected views. This automated reflection is achieved through bidirectional model transformations, which ensure correctness. We cast our proposal in the context of graph-based reactive systems whose dynamics are described by rewriting rules, and demonstrate our view-based framework in practice on a case study in cyber-physical systems.
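The get/put discipline behind bidirectional transformations can be sketched in a few lines of Python, using a flat dict of named parts as a stand-in for the specification model; the model shape, scope, and key names are invented for illustration, and the paper's actual setting (graph rewriting) is far richer.

```python
# A specification model as a flat dict of named parts; a "view" for some
# stakeholder property keeps only the parts in that property's scope.

def get(model, scope):
    """Forward transformation: project the submodel a property needs."""
    return {k: v for k, v in model.items() if k in scope}

def put(model, scope, view):
    """Backward transformation: write an edited view back, leaving
    everything outside the scope untouched."""
    assert set(view) == set(scope), "view must stay within its scope"
    return {**model, **view}

model = {"sensor": "on", "actuator": "idle", "network": "wifi"}
scope = {"sensor", "actuator"}

view = get(model, scope)          # a safety expert sees only these parts
view["actuator"] = "engaged"      # the expert edits their view
model2 = put(model, scope, view)  # the change propagates to the full model

# Well-behavedness (the lens laws) for this get/put pair:
assert put(model, scope, get(model, scope)) == model  # GetPut
assert get(put(model, scope, view), scope) == view    # PutGet
```

The two laws are what make the round trip trustworthy: an unedited view writes back as a no-op, and an edited view survives the write-back intact.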
{"title":"Scalable Multiple-View Analysis of Reactive Systems via Bidirectional Model Transformations","authors":"Christos Tsigkanos, Nianyu Li, Zhi Jin, Zhenjiang Hu, C. Ghezzi","doi":"10.1145/3324884.3416579","DOIUrl":"https://doi.org/10.1145/3324884.3416579","url":null,"abstract":"Systematic model-driven design and early validation enable engineers to verify that a reactive system does not violate its requirements before actually implementing it. Requirements may come from multiple stakeholders, who are often concerned with different facets - design typically involves different experts having different concerns and views of the system. Engineers start from a specification which may be sourced from some domain model, while validation is often done on state-transition structures that support model checking. Two computationally expensive steps may work against scalability: transformation from specification to state-transition structures, and model checking. We propose a technique that makes the former efficient and also makes the resulting transition systems small enough to be efficiently verified. The technique automatically projects the specification into submodels depending on a property sought to be evaluated, which captures some stakeholder's viewpoint. The resulting reactive system submodel is then transformed into a state-transition structure and verified. The technique achieves cone-of-influence reduction, by slicing at the specification model level. Submodels are analysis-equivalent to the corresponding full model. If stakeholders propose a change to a submodel based on their own view, changes are automatically propagated to the specification model and other views affected. Automated reflection is achieved thanks to bidirectional model transformations, ensuring correctness. We cast our proposal in the context of graph-based reactive systems whose dynamics is described by rewriting rules. We demonstrate our view-based framework in practice on a case study within cyber-physical systems.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130615842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthesizing Smart Solving Strategy for Symbolic Execution
Zehua Chen, Zhenbang Chen, Ziqi Shuai, Yufeng Zhang, Weiyu Pan
DOI: 10.1145/3324884.3418904
Constraint solving is one of the main challenges in symbolic execution. Modern SMT solvers allow users to customize the internal solving procedure through solving strategies. In this extended abstract, we report our recent progress in synthesizing a program-specific solving strategy for the symbolic execution of a program. We propose a two-stage procedure for symbolic execution. In the first stage, we synthesize a solving strategy using deep learning techniques. In the second stage, this strategy is used to improve the performance of constraint solving. Preliminary experimental results indicate the promise of our method.
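For readers unfamiliar with solving strategies: in Z3 they are compositions of tactics. A minimal z3py sketch of expressing a strategy and handing it to a solver follows; this particular tactic sequence is an assumed example, not a strategy synthesized by the paper's method.

```python
from z3 import Ints, Then

# A solving strategy as a sequential composition of Z3 tactics:
# simplify the goal, eliminate equalities, then fall back to the core solver.
strategy = Then('simplify', 'solve-eqs', 'smt')

x, y = Ints('x y')
solver = strategy.solver()  # a solver that applies this strategy
solver.add(x > 2, y >= 0, x + 2 * y == 7)
print(solver.check())       # sat
print(solver.model())       # e.g. [x = 7, y = 0]
```

The synthesis problem the paper tackles is choosing which tactics to compose, and in what order, for the constraints a particular program tends to produce.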
{"title":"Synthesizing Smart Solving Strategy for Symbolic Execution","authors":"Zehua Chen, Zhenbang Chen, Ziqi Shuai, Yufeng Zhang, Weiyu Pan","doi":"10.1145/3324884.3418904","DOIUrl":"https://doi.org/10.1145/3324884.3418904","url":null,"abstract":"Constraint solving is one of the challenges for symbolic execution. Modern SMT solvers allow users to customize the internal solving procedure by solving strategies. In this extended abstract, we report our recent progress in synthesizing a program-specific solving strategy for the symbolic execution of a program. We propose a two-stage procedure for symbolic execution. At the first stage, we synthesize a solving strategy by utilizing deep learning techniques. Then, the strategy will be used in the second stage to improve the performance of constraint solving. The preliminary experimental results indicate the promising of our method.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114346727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous Compliance
Martin Kellogg, Martin Schäf, S. Tasiran, Michael D. Ernst
DOI: 10.1145/3324884.3416593
Vendors who wish to provide software or services to large corporations and governments must often obtain numerous certificates of compliance. Each certificate asserts that the software satisfies a compliance regime, like SOC or the PCI DSS, to protect the privacy and security of sensitive data. The industry standard for obtaining a compliance certificate is an auditor manually auditing source code. This approach is expensive, error-prone, partial, and prone to regressions. We propose continuous compliance to guarantee that the codebase stays compliant on each code change using lightweight verification tools. Continuous compliance increases assurance and reduces costs. Continuous compliance is applicable to any source-code compliance requirement. To illustrate our approach, we built verification tools for five common audit controls related to data security: cryptographically unsafe algorithms must not be used, keys must be at least 256 bits long, credentials must not be hard-coded into program text, HTTPS must always be used instead of HTTP, and cloud data stores must not be world-readable. We evaluated our approach in three ways. (1) We applied our tools to over 5 million lines of open-source software. (2) We compared our tools to other publicly-available tools for detecting misuses of encryption on a previously-published benchmark, finding that only ours are suitable for continuous compliance. (3) We deployed a continuous compliance process at AWS, a large cloud-services company: we integrated verification tools into the compliance process (including auditors accepting their output as evidence) and ran them on over 68 million lines of code. Our tools and the data for the former two evaluations are publicly available.
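Three of the five controls invite lightweight per-commit checks. Below is a toy lint-style sketch covering HTTP vs. HTTPS, unsafe algorithms, and hard-coded credentials; these regexes are illustrative approximations, not the authors' verification tools, which are sound verifiers rather than pattern matchers, and the input file name is hypothetical.

```python
import re
from pathlib import Path

# (pattern, audit control it approximates)
RULES = [
    (re.compile(r"http://"), "HTTPS must always be used instead of HTTP"),
    (re.compile(r"(?i)\b(DES|RC4|MD5)\b"), "cryptographically unsafe algorithm"),
    (re.compile(r"(?i)(password|secret)\s*=\s*[\"'][^\"']+[\"']"),
     "credentials must not be hard-coded into program text"),
]

def check_file(path: Path):
    """Yield one finding per line that matches a compliance rule."""
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, control in RULES:
            if pattern.search(line):
                yield f"{path}:{lineno}: {control}"

for finding in check_file(Path("Example.java")):  # hypothetical input file
    print(finding)
```

Running such a check in CI on every change is the "continuous" part: compliance is re-established per commit rather than re-audited per certificate.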
{"title":"Continuous Compliance","authors":"Martin Kellogg, Martin Schäf, S. Tasiran, Michael D. Ernst","doi":"10.1145/3324884.3416593","DOIUrl":"https://doi.org/10.1145/3324884.3416593","url":null,"abstract":"Vendors who wish to provide software or services to large corporations and governments must often obtain numerous certificates of compliance. Each certificate asserts that the software satisfies a compliance regime, like SOC or the PCI DSS, to protect the privacy and security of sensitive data. The industry standard for obtaining a compliance certificate is an auditor manually auditing source code. This approach is expensive, error-prone, partial, and prone to regressions. We propose continuous compliance to guarantee that the codebase stays compliant on each code change using lightweight verification tools. Continuous compliance increases assurance and reduces costs. Continuous compliance is applicable to any source-code compliance requirement. To illustrate our approach, we built verification tools for five common audit controls related to data security: cryptographically unsafe algorithms must not be used, keys must be at least 256 bits long, credentials must not be hard-coded into program text, HTTPS must always be used instead of HTTP, and cloud data stores must not be world-readable. We evaluated our approach in three ways. (1) We applied our tools to over 5 million lines of open-source software. (2) We compared our tools to other publicly-available tools for detecting misuses of encryption on a previously-published benchmark, finding that only ours are suitable for continuous compliance. (3) We deployed a continuous compliance process at AWS, a large cloud-services company: we integrated verification tools into the compliance process (including auditors accepting their output as evidence) and ran them on over 68 million lines of code. Our tools and the data for the former two evaluations are publicly available.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128029455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Program Verification based Approach to Find Data Race Vulnerabilities in Interrupt-driven Program
Haining Feng
DOI: 10.1145/3324884.3418925
Data races are common in interrupt-driven programs and are difficult to find because of complicated interrupt interleavings. Static analysis is a mainstream technology for detecting such problems; however, the interrupt synchronization mechanism is hard for existing methods to handle, which produces many false alarms. Eliminating false alarms from static analysis is the main challenge for precise data race detection. In this paper, we present a framework that combines static analysis with program verification: it performs static analysis to find all potential races and then verifies each race to eliminate false alarms. Experimental results on related race benchmarks show that our implementation finds all race bugs in the static analysis phase and eliminates all false alarms through program verification.
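A minimal Python sketch of the two-phase shape of such a framework: over-approximate potential main/ISR races from recorded accesses, then filter them through a verification step. The Access representation and the verify stub are assumptions for illustration, not the paper's analysis.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Access:
    task: str         # "main" or the name of an interrupt handler
    var: str          # shared variable accessed
    is_write: bool
    irq_masked: bool  # were interrupts disabled at this access?

def potential_races(accesses):
    """Phase 1 (static analysis): over-approximate by reporting every
    main/ISR pair on the same variable with at least one write and no
    interrupt masking protecting the main-task access."""
    mains = [a for a in accesses if a.task == "main"]
    isrs = [a for a in accesses if a.task != "main"]
    for m, i in product(mains, isrs):
        if m.var == i.var and (m.is_write or i.is_write) and not m.irq_masked:
            yield (m, i)

def confirmed_races(accesses, verify):
    """Phase 2 (program verification): discharge false alarms.
    verify is a stand-in predicate for the verification back end."""
    return [r for r in potential_races(accesses) if verify(r)]

accesses = [
    Access("main", "counter", is_write=True, irq_masked=False),
    Access("main", "counter", is_write=True, irq_masked=True),  # protected
    Access("timer_isr", "counter", is_write=True, irq_masked=False),
]
print(confirmed_races(accesses, verify=lambda race: True))  # one race survives
```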
{"title":"A Program Verification based Approach to Find Data Race Vulnerabilities in Interrupt-driven Program","authors":"Haining Feng","doi":"10.1145/3324884.3418925","DOIUrl":"https://doi.org/10.1145/3324884.3418925","url":null,"abstract":"The data race problem is common in the interrupt-driven program, and it is difficult to find as a result of complicated interrupt interleaving. Static analysis is a mainstream technology to detect those problems, however, the synchronization mechanism of interrupt is hard to be processed by the existing method, which brings many false alarms. Eliminating false alarms in static analysis is the main challenge for precisely data race detection. In this paper, we present a framework of static analysis combined with program verification, which performs static analysis to find all potential races, and then verifies every race to eliminate false alarms. The experiment results on related race benchmarks show that our implementation finds all race bugs in the phase of static analysis, and eliminates all false alarms through program verification.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131357370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The New Approach to IT Testing: Real Transaction-Based Automated Validation Solution
Yongsik Kim, SoAh Min, Youkyung Kim
DOI: 10.1145/3324884.3421839
Traditional IT projects roll out newly developed software or systems after iterating through manual tests based on scenarios and cases deemed sufficient. However, due to the time and budget limitations of IT projects, these tests almost never cover all possible real-world scenarios and cases. As a result, not all potential defects can be eliminated before go-live, and unexpected failures may occur, causing severe damage to both customers and IT project contractors. This paper demonstrates a real transaction-based automated testing approach named PerfecTwin, with several real-world examples. PerfecTwin overcomes the above limitations of traditional testing by running the new and old systems side by side and automatically validating the new system against the old system's actual transactions in real time, which can eliminate almost all potential defects before go-live.
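A minimal sketch of the side-by-side validation idea: mirror each live transaction to both systems and diff their responses. The endpoints, payload, and the requests-based transport are illustrative assumptions, not PerfecTwin's architecture.

```python
import requests  # third-party HTTP client, used as a stand-in transport

OLD_SYSTEM = "https://old.example.com/api"  # hypothetical endpoints
NEW_SYSTEM = "https://new.example.com/api"

def shadow_validate(transaction: dict):
    """Send the same real transaction to both systems and report any
    divergence in status code or response body."""
    old = requests.post(OLD_SYSTEM, json=transaction, timeout=5)
    new = requests.post(NEW_SYSTEM, json=transaction, timeout=5)
    if old.status_code != new.status_code or old.json() != new.json():
        return {"transaction": transaction,
                "old": (old.status_code, old.json()),
                "new": (new.status_code, new.json())}
    return None  # both systems agree on this transaction

# In production the live transaction stream would be tapped; here, one example:
mismatch = shadow_validate({"account": "12-3456", "amount": 100})
print("defect candidate:" if mismatch else "validated:", mismatch)
```

Because the old system's behavior on real traffic is the oracle, the test suite is effectively as large and as realistic as production itself.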
{"title":"The New Approach to IT Testing : Real Transaction-Based Automated Validation Solution","authors":"Yongsik Kim, SoAh Min, Youkyung Kim","doi":"10.1145/3324884.3421839","DOIUrl":"https://doi.org/10.1145/3324884.3421839","url":null,"abstract":"Traditional IT projects have rolled out newly developed software or systems after iterating manual tests based on the scenarios and cases that are considered sufficient. However, due to the time and budget limitation of IT projects, these traditional tests almost always fail to include all the possible scenarios and cases of the real world. Thus, we cannot eliminate all potential defects before go-live and unexpected failures might occur as a result, which can lead to severe damage to both customers and IT project contractors. This paper demonstrates a real transaction-based automated testing approach named ‘PerfecTwin’ with several real-world examples. PerfecTwin overcomes the above limitations of the traditional testing by running the new and old systems side-by-side, automatically validating the new system against the old system's actual transactions, in real time, which can eliminate almost all potential defects before go-live.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116299064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inferring and Applying Def-Use Like Configuration Couplings in Deployment Descriptors
Chengyuan Wen, Yaxuan Zhang, Xiao He, Na Meng
DOI: 10.1145/3324884.3416577
When building enterprise applications on Java frameworks (e.g., Spring), developers often specify components and configure operations with a special kind of XML file called a deployment descriptor (DD). Maintaining such XML files is challenging and time-consuming because (1) the correct configuration semantics is domain-specific but usually vaguely documented, and (2) existing compilers and program analysis tools rarely examine XML files. To help developers ensure the quality of DDs, this paper presents a novel approach, XEditor, that extracts configuration couplings (i.e., frequently co-occurring configurations) from DDs and adopts the coupling rules to validate new or updated files. XEditor has two phases: coupling extraction and bug detection. To identify couplings, XEditor first mines the DDs of open-source projects and extracts XML entity pairs that (i) frequently coexist in the same files and (ii) hold the same data at least once. XEditor then applies customized association rule mining to the extracted pairs. For bug detection, given a new XML file, XEditor checks whether the file violates any coupling; if so, XEditor reports the violation(s). For evaluation, we first created two data sets from the 4,248 DDs mined from 1,137 GitHub projects. In experiments with these data sets, XEditor extracted couplings with high precision (73%) and detected bugs with 92% precision, 96% recall, and 94% accuracy. Additionally, we applied XEditor to the version history of another 478 GitHub projects, where it identified 25 very suspicious XML updates, 15 of which were later fixed by developers.
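A minimal sketch of the coupling-extraction and violation-checking phases: collect element paths from each descriptor, keep path pairs that co-occur in a large fraction of files, and flag files where only one side of a coupling is present. The support threshold is an assumption, and the paper's second condition (pairs holding the same data at least once) and the full association-rule mining step are omitted for brevity.

```python
import xml.etree.ElementTree as ET
from collections import Counter
from itertools import combinations

def element_paths(xml_file):
    """The set of root-to-element tag paths appearing in one descriptor."""
    paths = set()
    def walk(elem, prefix):
        path = prefix + "/" + elem.tag
        paths.add(path)
        for child in elem:
            walk(child, path)
    walk(ET.parse(xml_file).getroot(), "")
    return paths

def mine_couplings(xml_files, min_support=0.8):
    """Pairs of paths that coexist in at least min_support of the files."""
    pair_counts = Counter()
    for f in xml_files:
        for pair in combinations(sorted(element_paths(f)), 2):
            pair_counts[pair] += 1
    n = len(xml_files)
    return {pair for pair, c in pair_counts.items() if c / n >= min_support}

def violations(xml_file, couplings):
    """Flag couplings broken by a new or updated descriptor."""
    paths = element_paths(xml_file)
    return [(a, b) for a, b in couplings
            if (a in paths) != (b in paths)]  # one side present, other missing
```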
{"title":"Inferring and Applying Def-Use Like Configuration Couplings in Deployment Descriptors","authors":"Chengyuan Wen, Yaxuan Zhang, Xiao He, Na Meng","doi":"10.1145/3324884.3416577","DOIUrl":"https://doi.org/10.1145/3324884.3416577","url":null,"abstract":"When building enterprise applications on Java frameworks (e.g., Spring), developers often specify components and configure operations with a special kind of XML files named “deployment descriptors (DD)”. Maintaining such XML files is challenging and time-consuming; because (1) the correct configuration semantics is domain-specific but usually vaguely documented, and (2) existing compilers and program analysis tools rarely examine XML files. To help developers ensure the quality of DD, this paper presents a novel approach-XEDITOR-that extracts configuration couplings (i.e., frequently co-occurring configurations) from DD, and adopts the coupling rules to validate new or updated files. Xeditor has two phases: coupling extraction and bug detection. To identify couplings, Xeditor first mines DD in open-source projects, and extracts XML entity pairs that (i) frequently coexist in the same files and (ii) hold the same data at least once. Xeditor then applies customized association rule mining to the extracted pairs. For bug detection, given a new XML file, Xeditor checks whether the file violates any coupling; if so, Xeditor reports the violation(s). For evaluation, we first created two data sets with the 4,248 DD mined from 1,137 GitHub projects. According to the experiments with these data sets, Xeditor extracted couplings with high precision (73%); it detected bugs with 92% precision, 96% recall, and 94% accuracy. Additionally, we applied Xeditor to the version history of another 478 GitHub projects. Xeditor identified 25 very suspicious XML updates, 15 of which were later fixed by developers.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121973877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mastering Uncertainty in Performance Estimations of Configurable Software Systems
Johannes Dorn, S. Apel, Norbert Siegmund
DOI: 10.1007/s10664-022-10250-2
Scalability and Precision Improvement of Neural Program Synthesis
Yating Zhang
DOI: 10.1145/3324884.3418912
Most neural program synthesis approaches construct encoder-decoder models to learn a probability distribution over the space of programs. Such approaches have two drawbacks: the synthesis scale is relatively small, and the correctness of the synthesis result cannot be guaranteed. We address these problems by constructing a framework that analyzes and solves them along three dimensions: program space description, model architecture, and result processing. Experiments show that the scalability and precision of synthesis improve along every dimension.
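To make the "result processing" dimension concrete, here is a toy sample-and-check loop: draw candidate programs from a token distribution over a tiny DSL (uniform here, learned in practice) and keep only candidates consistent with the given examples. The DSL and all names are invented for illustration, not the paper's framework.

```python
import random

# Toy DSL: straight-line programs over a single integer input.
OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "dbl": lambda x: 2 * x}

def sample_program(length, weights):
    """Stand-in for decoding from a learned distribution over tokens."""
    return random.choices(list(OPS), weights=weights, k=length)

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def synthesize(examples, length=3, tries=10_000):
    """Result processing: discard candidates that any example refutes,
    so every returned program is correct on the given examples."""
    for _ in range(tries):
        candidate = sample_program(length, weights=[1.0] * len(OPS))
        if all(run(candidate, x) == y for x, y in examples):
            return candidate
    return None

print(synthesize([(1, 4), (2, 6)]))  # e.g. ['dbl', 'inc', 'inc']
```

Checking sampled outputs against the specification is one simple way to restore a correctness guarantee that the neural model alone cannot provide.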
{"title":"Scalability and Precision Improvement of Neural Program Synthesis","authors":"Yating Zhang","doi":"10.1145/3324884.3418912","DOIUrl":"https://doi.org/10.1145/3324884.3418912","url":null,"abstract":"Mosts of the neural synthesis construct encoder-decoder models to learn a probability distribution over the space of programs. Two drawbacks in such neural program synthesis are that the synthesis scale is relatively small and the correctness of the synthesis result cannot be guaranteed. We address these problems by constructing a framework, which analyzes and solves problems from three dimensions: program space description, model architecture, and result processing. Experiments show that the scalability and precision of synthesis are improved in every dimension.","PeriodicalId":106337,"journal":{"name":"2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130452665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}