
Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings — Latest Publications

μ2
Isabella Laybourn
Coverage-guided fuzzing is a popular tool for finding bugs. This paper introduces μ2, a strategy for extending coverage-guided fuzzing with mutation analysis, which previous work has found to be better correlated with test effectiveness than code coverage. μ2 was implemented in Java using the JQF framework and the default mutations used by PIT. Initial evaluation shows increased performance when using μ2 as compared to coverage-guided fuzzing.
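The feedback loop the abstract describes can be sketched as follows. This is a minimal illustration, not μ2's implementation: the toy program, the two program mutants, and the input-mutation step are all made-up assumptions (μ2 itself uses JQF and PIT's default mutators).

```python
import random

def program(x):
    # Toy program under test: returns a tuple of observable branch outcomes.
    return (x > 10, x % 2 == 0)

# "Mutants" are variants of the program; an input "kills" a mutant when the
# mutant's output differs from the original program's output on that input.
MUTANTS = [
    lambda x: (x >= 10, x % 2 == 0),   # boundary mutation: > becomes >=
    lambda x: (x > 10, x % 2 == 1),    # parity mutation: == 0 becomes == 1
]

def mutation_guided_fuzz(seed_corpus, iterations=200, rng=None):
    """Keep an input in the corpus when it kills a surviving mutant,
    instead of when it reaches new code coverage."""
    rng = rng or random.Random(0)
    corpus = list(seed_corpus)
    killed = set()
    for _ in range(iterations):
        parent = rng.choice(corpus)
        child = parent + rng.randint(-3, 3)     # simple input mutation
        expected = program(child)
        newly_killed = {i for i, m in enumerate(MUTANTS)
                        if i not in killed and m(child) != expected}
        if newly_killed:                        # mutation-analysis feedback
            corpus.append(child)
            killed |= newly_killed
    return corpus, killed
```

The only change from plain coverage-guided fuzzing is the feedback predicate: `newly_killed` replaces "reached a new branch" as the reason to retain an input.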
Citations: 1
META
Tianqi Zhou, Jiawei Liu, Yifan Wang, Zhenyu Chen
As market demand for software quality keeps growing, companies increasingly seek skilled testing engineers. Online testing platforms cultivate students by offering software testing courses, and students are encouraged to submit test code through exams on such a platform during course evaluation. However, the problem of how to effectively assess the test code written by students remains open. This paper implements META, a multidimensional evaluation system for software testing built on an online testing platform. META is designed to address the problem of evaluating testing effectiveness systematically. It evaluates students' testing effectiveness in seven dimensions across three types of software testing: developer unit testing, web application testing, and mobile application testing, combining test code and test behaviours. To validate META, 14 exams were selected from MOOCTest for an experiment: ten for developer unit testing, three for mobile application testing, and one for web application testing, involving 718 students and 26,666 submitted records. The experimental results show that META can reveal significant variability across dimensions for different students with similar overall scores. Video URL: https://www.youtube.com/watch?v=EiCSMtefPMU.
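The key observation — students with similar totals can have very different per-dimension profiles — can be sketched as below. The dimension names and the equal weighting are hypothetical placeholders, not META's actual metrics.

```python
# Illustrative dimension names; META's real seven dimensions may differ.
DIMENSIONS = ["coverage", "mutation_score", "assertion_density",
              "readability", "efficiency", "fault_detection", "behaviour"]

def profile(scores):
    """Pair raw per-dimension scores (0-100) with dimension names."""
    assert len(scores) == len(DIMENSIONS)
    return dict(zip(DIMENSIONS, scores))

def total(profile_dict):
    """Equal-weight aggregate; a real system could weight dimensions."""
    return sum(profile_dict.values()) / len(profile_dict)

# Two students with identical totals but very different profiles:
student_a = profile([90, 30, 80, 40, 85, 35, 90])
student_b = profile([65, 65, 65, 65, 65, 65, 60])
```

Both profiles sum to 450, so a single aggregate score would rank the students identically while hiding that student A is weak on mutation score and fault detection.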
Citations: 0
TestKnight
Cristian-Alexandru Botocan, Piyush Deshmukh, P. Makridis, Jorge Romeu Huidobro, Mathanrajan Sundarrajan, M. Aniche, A. Zaidman
Software testing is one of the most important aspects of modern software development. To ensure the quality of the software, developers should ideally write and execute automated tests regularly as their codebase evolves. TestKnight, a plugin for the IntelliJ IDEA integrated development environment (IDE), aims to help Java developers improve the testing process through support for creating and maintaining high-quality test suites.
GitHub repo: https://github.com/SERG-Delft/testknight
JetBrains Marketplace: https://plugins.jetbrains.com/plugin/17072-testknight
YouTube video: https://www.youtube.com/watch?v=BSaL-K7ug6M
Citations: 1
iPFlakies
Rui-Jing Wang, Y. Chen, Wing Lam
Developers typically run tests after code changes. Flaky tests, which are tests that can nondeterministically pass and fail when run on the same version of code, can mislead developers about their recent changes. Much of the prior work on flaky tests has focused on Java projects. One prominent category of flaky tests in that work is order-dependent (OD) tests, which pass or fail depending on the order in which tests are run. For example, our prior work proposed using other tests in the test suite to fix (or correctly set up) the state needed by Java OD tests to pass. Unlike Java flaky tests, flaky tests in other programming languages have received less attention. To help with this problem, another piece of prior work recently studied flaky tests in Python projects and detected many OD tests. Unfortunately, that work did not identify the other tests in the test suites that can be used to fix the OD tests. To fill this gap, we propose iPFlakies, a framework for automatically detecting and fixing Python OD tests. Using iPFlakies, we extend the prior work's dataset to include (1) tests that can be used to reproduce and fix the OD tests and (2) patches for the OD tests. Our work to extend the dataset finds that reproducing passing and failing results of flaky tests can be difficult, and that iPFlakies is effective at detecting and fixing Python OD tests.
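The core of OD-test detection — flagging tests whose outcome changes with execution order — can be sketched with a toy suite that shares mutable state, the classic cause of order dependence. The test names and the shared `state` dict are illustrative assumptions, not iPFlakies internals.

```python
import itertools

state = {"initialized": False}

def test_setup():
    state["initialized"] = True      # always passes, but mutates state
    return True

def test_victim():
    return state["initialized"]      # passes only if test_setup ran first

SUITE = {"test_setup": test_setup, "test_victim": test_victim}

def detect_od_tests(suite):
    """Run the suite in every order; a test that both passed and failed
    across orders is order-dependent."""
    outcomes = {name: set() for name in suite}
    for order in itertools.permutations(suite):
        state["initialized"] = False          # reset shared state per run
        for name in order:
            outcomes[name].add(suite[name]())
    return {name for name, results in outcomes.items() if len(results) > 1}
```

Here `test_victim` is the OD test, and `test_setup` is the "state-setter" that a fix could run first — the kind of helper test the abstract says the prior Python work did not identify.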
Citations: 10
GARUDA
Ajinkya Rajput, K. Gopinath
Symbolic execution is a widely employed technique in vulnerability detection. However, it faces an acute state-space-explosion problem when analyzing programs that dynamically allocate memory. In this work we present GARUDA, which makes symbolic execution heap-aware to mitigate the state-space explosion. We show that GARUDA can detect vulnerabilities in real-world software and, on the heap-safety category of the TestComp 2021 benchmarks, can generate inputs that trigger two more safety violations than the winner of that category in the TestComp 2021 testing competition.
Citations: 0
FuzzTastic
Stephan Lipp, Daniel Elsner, T. Hutzelmann, Sebastian Banescu, A. Pretschner, Marcel Böhme
Performing sound and fair fuzzer evaluations can be challenging, not only because of the randomness involved in fuzzing, but also due to the large number of fuzz tests generated. Existing evaluations use code coverage as a proxy measure for fuzzing effectiveness. Yet, instead of considering coverage of all generated fuzz inputs, they only consider the inputs stored in the fuzzer queue. However, as we show in this paper, this approach can lead to biased assessments due to path collisions. Therefore, we developed FuzzTastic, a fuzzer-agnostic coverage analyzer that allows practitioners and researchers to perform uniform fuzzer evaluations that are not affected by such collisions. In addition, its time-stamped coverage-probing approach enables frequency-based coverage analysis to identify barely tested source code and to visualize fuzzing progress over time and across code. To foster further studies in this field, we make FuzzTastic, together with a benchmark dataset worth ~12 CPU-years of fuzzing, publicly available; the demo video can be found at https://youtu.be/Lm-eBx0aePA.
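Frequency-based coverage as described — counting how often each code location is hit across *all* generated inputs, not only queued ones — can be sketched as follows. The toy program and its branch IDs are assumptions for illustration only.

```python
from collections import Counter

def run_with_coverage(x):
    """Toy instrumented program: reports which branch IDs it executed."""
    hit = ["entry"]
    if x % 100 == 0:
        hit.append("rare_branch")
    else:
        hit.append("common_branch")
    return hit

def frequency_coverage(inputs):
    """Accumulate per-branch execution counts over every input."""
    counts = Counter()
    for x in inputs:                 # all inputs, not only the fuzzer queue
        counts.update(run_with_coverage(x))
    return counts

counts = frequency_coverage(range(250))
barely_tested = [branch for branch, n in counts.items() if n < 5]
```

Because two distinct inputs hitting the same branch increment the same counter, path collisions no longer make one input invisible; instead, low counts directly expose barely tested code.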
Citations: 5
GIFdroid
Sidong Feng, Chunyang Chen
Bug reports are vital for software maintenance, allowing users to inform developers of problems encountered while using software. However, it is difficult for non-technical users to write clear descriptions of how a bug occurs. Therefore, more and more users record their screen to report bugs, as a recording is easy to create and captures the detailed steps that trigger the bug. But it is still tedious and time-consuming for developers to reproduce the bug, due to the length of the recording and the unclear actions within it. To overcome these issues, we propose GIFdroid, a lightweight approach to automatically replay the execution trace from visual bug reports. GIFdroid adopts image-processing techniques to extract keyframes from the recording, maps them to states in a GUI transition graph, and generates the execution trace of those states to trigger the bug. Our automated experiments and user study demonstrate the accuracy, efficiency, and usefulness of the approach.
GitHub link: https://github.com/sidongfeng/gifdroid
Video link: https://youtu.be/5GIw1Hdr6CE
Appendix link: https://sites.google.com/view/gifdroid
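The first step the abstract describes, keyframe extraction, is commonly done by frame differencing: keep a frame when it differs enough from the last kept frame. The sketch below uses flat pixel lists and an arbitrary 0.3 threshold as assumptions; GIFdroid operates on real images.

```python
def frame_diff(a, b):
    """Fraction of pixels that changed between two equal-size frames."""
    return sum(1 for p, q in zip(a, b) if p != q) / len(a)

def extract_keyframes(frames, threshold=0.3):
    """Indices of frames that differ from the previous keyframe by at
    least `threshold`; the first frame is always a keyframe."""
    keyframes = [0]
    for i in range(1, len(frames)):
        if frame_diff(frames[keyframes[-1]], frames[i]) >= threshold:
            keyframes.append(i)
    return keyframes

# Two stable screens with a small intermediate change between them:
screen_a, screen_b = [0] * 10, [1] * 10
recording = [screen_a, screen_a, [0] * 8 + [1] * 2, screen_b, screen_b]
```

On this recording, the partial change at index 2 (20% of pixels) stays below the threshold, so only the genuine screen transition at index 3 becomes a new keyframe.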
Citations: 6
VRTest
Xiaoyin Wang
Virtual Reality (VR) is an emerging technology that attracts interest from various application domains such as training, education, remote communication, gaming, and navigation. Despite the ever-growing number of VR software projects, quality assurance techniques for VR software have not been well studied, so the validation of VR software still relies largely on manual testing. In this paper, we present a novel testing framework called VRTest to automate the testing of scenes in VR software. In particular, VRTest extracts information from a VR scene and controls the user camera to explore the scene and interact with the virtual objects using configurable testing strategies. VRTest currently supports two built-in strategies: VRMonkey and VRGreed, which use pure random exploration and a greedy algorithm, respectively, to explore interactable objects in VR scenes. A video of our tool is available on YouTube at https://www.youtube.com/watch?v=TARqTEaa7_Q
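A greedy exploration strategy in the spirit of VRGreed can be sketched as repeatedly moving to the nearest not-yet-visited interactable object. The 2-D scene layout and object names below are made-up assumptions, not VRTest's API.

```python
import math

def nearest_unvisited(position, objects, visited):
    """Name of the closest interactable object not yet visited, or None."""
    candidates = [(name, pos) for name, pos in objects.items()
                  if name not in visited]
    if not candidates:
        return None
    return min(candidates, key=lambda item: math.dist(position, item[1]))[0]

def greedy_explore(start, objects):
    """Visit every object, always choosing the nearest unvisited one."""
    position, visited, order = start, set(), []
    while True:
        target = nearest_unvisited(position, objects, visited)
        if target is None:
            return order
        visited.add(target)
        order.append(target)
        position = objects[target]   # "interact", then continue from there

scene = {"button": (1.0, 0.0), "door": (5.0, 0.0), "lever": (5.0, 1.0)}
```

A pure-random strategy (VRMonkey-style) would instead pick `target` with `random.choice`, trading exploration efficiency for simplicity.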
Citations: 4
DistFax
Xiaoqin Fu, Boxiang Lin, Haipeng Cai
In this paper, we present DistFax, a toolkit for measuring common distributed systems, focusing on their interprocess communications (IPCs), a vital aspect of distributed system run-time behaviors. DistFax measures the coupling and cohesion of distributed systems via respective IPC metrics. It also characterizes the run-time quality of distributed systems via a set of dynamic quality metrics. DistFax then computes statistical correlations between the IPC metrics and quality metrics. It further exploits the correlations to classify the system quality status with respect to various quality metrics in a standard quality model. We empirically demonstrated the practicality and usefulness of DistFax in measuring the IPCs and quality of 11 real-world distributed systems against diverse execution scenarios. The demo video of DistFax can be viewed at https://youtu.be/VLmNiHvOuWQ online, and the artifact package is publicly available at https://tinyurl.com/zaz27ec8.
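The statistical step — correlating an IPC metric with a quality metric across systems — can be illustrated with a hand-rolled Pearson correlation. The per-system numbers below are made up for illustration; they are not DistFax's measurements.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per measured system: hypothetical (IPC coupling, error rate).
ipc_coupling = [0.2, 0.5, 0.7, 0.9]
error_rate   = [0.01, 0.03, 0.06, 0.08]

r = pearson(ipc_coupling, error_rate)
```

A coefficient near +1 here would suggest that higher IPC coupling goes with a worse quality metric — the kind of relationship DistFax then uses to classify system quality status.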
Citations: 4
WhyGen
Weixiang Yan, Yuanchun Li
Deep learning has demonstrated great abilities in various code generation tasks. However, despite the great convenience for some developers, many are concerned that the code generators may recite or closely mimic copyrighted training data without user awareness, leading to legal and ethical concerns. To ease this problem, we introduce a tool, named WhyGen, to explain the generated code by referring to training examples. Specifically, we first introduce a data structure, named inference fingerprint, to represent the decision process of the model when generating a prediction. The fingerprints of all training examples are collected offline and saved to a database. When the model is used at runtime for code generation, the most relevant training examples can be retrieved by querying the fingerprint database. Our experiments have shown that WhyGen is able to precisely notify the users about possible recitations and highly similar imitations with a top-10 accuracy of 81.21%. The demo video can be found at https://youtu.be/EtoQP6850To.
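Fingerprint retrieval as described reduces to nearest-neighbour search over stored vectors. The database entries and vectors below are illustrative assumptions; WhyGen derives its actual fingerprints from the model's internal states during generation.

```python
import math

# Hypothetical offline fingerprint database: training example -> vector.
FINGERPRINT_DB = {
    "example_sort":  [0.9, 0.1, 0.0],
    "example_regex": [0.1, 0.8, 0.2],
    "example_http":  [0.0, 0.2, 0.9],
}

def retrieve(query, db, k=2):
    """Return the k training examples whose fingerprints are closest
    (by Euclidean distance) to the query fingerprint."""
    ranked = sorted(db, key=lambda name: math.dist(query, db[name]))
    return ranked[:k]

# Fingerprint computed at generation time for some produced snippet:
hits = retrieve([0.85, 0.15, 0.05], FINGERPRINT_DB)
```

The retrieved examples are what the tool would surface to the user as possible recitations or close imitations of training data.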
Citations: 5