Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00139
Rashina Hoda
Grounded Theory (GT), while becoming increasingly popular in software engineering, is also one of the most misunderstood, misused, and poorly presented and evaluated methods in the field. When applied well, GT results in dense and valuable explanations of how and why phenomena occur in practice. GT can be applied as a full research method leading to mature theories, or in a limited capacity for data analysis within other methods, using its robust open coding and constant comparison procedures. This technical briefing will cover the social origins of GT, present examples of grounded theories developed in SE, discuss the key challenges SE researchers face, and provide a gentle introduction to socio-technical grounded theory, a variant of GT for software engineering research.
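As a rough mechanical analogy only (GT coding is an interpretive, human-driven activity, not an algorithm), open coding with constant comparison can be pictured as grouping raw quotes into emergent codes, comparing each new quote against every existing code. The quotes and the keyword-overlap rule below are entirely hypothetical:

```python
# Toy sketch of open coding with constant comparison. Illustrative only:
# real GT coding is interpretive; quotes, codes, and the matching rule
# here are hypothetical.

def open_code(quote, codebook):
    """Assign a quote to an existing code if its keywords overlap
    (constant comparison); otherwise a new code emerges."""
    words = set(quote.lower().split())
    for code, keywords in codebook.items():
        if words & keywords:       # compare against every existing code
            keywords |= words      # refine the code with new properties
            return code
    code = f"code_{len(codebook) + 1}"
    codebook[code] = words         # no fit: create a new code
    return code

codebook = {}
quotes = [
    "daily standups keep everyone aligned",
    "standups surface blockers early",
    "retrospectives help us improve process",
]
assignments = [open_code(q, codebook) for q in quotes]
```

The first two quotes share the word "standups" and collapse into one code; the third starts a new one.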
Title: Decoding Grounded Theory for Software Engineering
Venue: 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00078
Yigit Küçük, Tim A. D. Henderson, Andy Podgurski
This work presents an overview of the artifact for the paper titled "Improving Fault Localization by Integrating Value and Predicate Based Causal Inference Techniques". The artifact is packaged in a virtual machine and includes the scripts for the UniVal fault-localization algorithm, evaluated on the Defects4J test suite. Technical information about the individual components of the artifact's repository, as well as guidance on the documentation needed to use the software, is provided.
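UniVal's actual method is based on causal inference over values and predicates and is not reproduced here; as background for the fault-localization setting the artifact supports, a much simpler spectrum-based score (Ochiai) over hypothetical coverage data looks like this:

```python
import math

# Toy spectrum-based fault localization using the Ochiai score.
# Note: this is a deliberately simpler technique than UniVal's causal
# inference; coverage data and statement ids are hypothetical.
coverage = {
    "t1": {1, 2, 3},
    "t2": {1, 3, 4},
    "t3": {1, 2, 4},
}
failed = {"t1", "t2"}  # tests that failed

def ochiai(stmt):
    ef = sum(1 for t in failed if stmt in coverage[t])   # failing, covered
    ep = sum(1 for t in coverage
             if t not in failed and stmt in coverage[t])  # passing, covered
    denom = math.sqrt(len(failed) * (ef + ep))
    return ef / denom if denom else 0.0

# Rank all statements from most to least suspicious.
ranking = sorted({s for c in coverage.values() for s in c},
                 key=ochiai, reverse=True)
```

Statement 3 is covered by both failing tests and no passing test, so it ranks first.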
Title: Artifact for Improving Fault Localization by Integrating Value and Predicate Based Causal Inference Techniques
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00089
Frédéric Recoules, Sébastien Bardin, Richard Bonichon, Matthieu Lemerre, L. Mounier, Marie-Laure Potet
The main goal of the artifact is to support the experimental claims of paper #992, "Interface Compliance of Inline Assembly: Automatically Check, Patch and Refine", by making both the prototype and the data available to the community. The expected result is the same output as the figures given in Table I and Table IV (Appendix C) of the paper. In addition, we hope the released snapshot of our prototype is simple, documented, and robust enough to be of use to people dealing with inline assembly.
Title: RUSTInA: Automatically Checking and Patching Inline Assembly Interface Compliance (Artifact Evaluation): Accepted submission #992 – "Interface Compliance of Inline Assembly: Automatically Check, Patch and Refine"
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00056
Reem Aleithan
Explaining the prediction results of software bug prediction models is a challenging task that can provide useful information for developers to understand and fix the predicted bugs. Recently, Jirayus et al. proposed using two model-agnostic techniques (i.e., LIME and iBreakDown) to explain the prediction results of bug prediction models. Although their experiments on file-level bug prediction show promising results, the performance of these techniques on explaining the results of just-in-time (i.e., change-level) bug prediction is unknown. This paper conducts the first empirical study exploring the explainability of these model-agnostic techniques on just-in-time bug prediction models. Specifically, this study takes a three-step approach: 1) replicating previously widely used just-in-time bug prediction models, 2) applying Local Interpretable Model-agnostic Explanations (LIME) and iBreakDown to the prediction results, and 3) manually evaluating the explanations for buggy instances (i.e., positive predictions) against the root causes of the bugs. The results of our experiment show that, unlike in the file-level setting, LIME and iBreakDown fail to provide meaningful explanations for just-in-time bug prediction models. This paper calls for new approaches to explaining the results of just-in-time bug prediction models.
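LIME works by fitting a local linear surrogate around one prediction; as a rough sketch of the model-agnostic idea only, the toy probe below measures how much a (hypothetical) change-level model's output drops when each feature is zeroed. The model and the change-level features are invented for illustration:

```python
# Minimal sketch of model-agnostic explanation by feature perturbation.
# LIME fits a local linear surrogate; this simpler "zero one feature"
# sensitivity probe only conveys the model-agnostic idea. The model and
# features (lines added, files touched, is-fix flag) are hypothetical.

def model(x):
    """Stand-in 'bug prediction model': risk grows with churn and files."""
    lines_added, files_touched, is_fix = x
    return 0.01 * lines_added + 0.1 * files_touched + 0.2 * is_fix

def explain(x):
    """Contribution of each feature = prediction drop when it is zeroed."""
    base = model(x)
    return {i: base - model([0 if i == j else v for j, v in enumerate(x)])
            for i in range(len(x))}

contrib = explain([120, 3, 1])   # a hypothetical code change
```

Here churn (feature 0) dominates the explanation, which a reviewer could then compare against the change's actual root cause, as the study does manually.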
Title: Explainable Just-In-Time Bug Prediction: Are We There Yet?
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00050
Nina Körber
For teachers, automated tool support for debugging and assessing their students' programming assignments is a great help in their everyday business. For block-based programming languages which are commonly used to introduce younger learners to programming, testing frameworks and other software analysis tools exist, but require manual work such as writing test suites or formal specifications. However, most of the teachers using languages like Scratch are not trained for or experienced in this kind of task. Linters do not require manual work but are limited to generic bugs and therefore miss potential task-specific bugs in student solutions. In prior work, we proposed the use of anomaly detection to find project-specific bugs in sets of student programming assignments automatically, without any additional manual labour required from the teachers' side. Evaluation on student solutions for typical programming assignments showed that anomaly detection is a reliable way to locate bugs in a data set of student programs. In this paper, we enhance our initial approach by lowering the abstraction level. The results suggest that the lower abstraction level can focus anomaly detection on the relevant parts of the programs.
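A minimal sketch of the idea, assuming (hypothetically) that student programs can be flattened into token sequences: block bigrams that are rare across the whole set of submissions raise a program's anomaly score, so a solution that deviates from its peers stands out without any teacher-written specification. Real Scratch programs are block trees; the flat token lists here are a simplification:

```python
from collections import Counter

# Toy anomaly detection over student solutions: bigrams that are rare
# across all submissions flag a suspicious program. Token names and
# solutions are hypothetical.
solutions = {
    "s1": ["whenClicked", "forever", "move", "bounce"],
    "s2": ["whenClicked", "forever", "move", "bounce"],
    "s3": ["whenClicked", "move", "forever", "bounce"],  # blocks swapped
}

bigrams = Counter(
    pair for toks in solutions.values() for pair in zip(toks, toks[1:]))

def anomaly_score(toks):
    """Mean rarity of the program's bigrams (1.0 = seen nowhere else)."""
    pairs = list(zip(toks, toks[1:]))
    return sum(1 / bigrams[p] for p in pairs) / len(pairs)

most_suspicious = max(solutions, key=lambda s: anomaly_score(solutions[s]))
```

Solution s3, whose block order deviates from the rest, gets the highest score.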
Title: Anomaly Detection in Scratch Assignments
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00090
Larissa Braz, Enrico Fregnan, G. Çalikli, Alberto Bacchelli
Improper Input Validation (IIV) is a dangerous software vulnerability that occurs when a system does not safely handle input data. Although IIV is easy to detect and fix, it still commonly happens in practice; so, why do developers not recognize IIV? Answering this question is key to understanding how to support developers in creating secure software systems. In our work, we studied to what extent developers can detect IIV and investigated the underlying reasons. To do so, we conducted an online experiment with 146 software developers. In this document, we explain how to obtain the artifact package of our study, describe the artifact material, and explain how to use the artifacts.
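A minimal illustration of IIV, with hypothetical handler names (not taken from the study's experiment tasks): the first function trusts its input, the second validates it before use:

```python
# Hedged illustration of Improper Input Validation (IIV). Function names
# and the accepted range are hypothetical examples, not code from the study.

def transfer_unsafe(amount_str):
    return float(amount_str)           # IIV: accepts "-50", "nan", "1e99", ...

def transfer_safe(amount_str):
    try:
        amount = float(amount_str)
    except ValueError:
        raise ValueError("amount must be a number")
    if not (0 < amount <= 10_000):     # reject negatives, zero, huge values
        raise ValueError("amount out of range")
    return amount
```

The unsafe variant happily returns a negative transfer amount; the safe one rejects it (and also NaN, since `0 < nan` is false).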
Title: Data and Materials for: Why Don't Developers Detect Improper Input Validation?'; DROP TABLE Papers; --
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00087
Jordan Henkel, Denini Silva, Leopoldo Teixeira, Marcelo d’Amorim, T. Reps
Shipwright is a human-in-the-loop system for Dockerfile repair. In this artifact, we provide the data, tools, and scripts necessary to allow others to run our experiments (either in full, or reduced versions where necessary). In particular, we provide code and data corresponding to each of the four research questions we answered in the Shipwright paper.
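In the spirit of (but far simpler than) Shipwright, a toy human-in-the-loop check can propose a patch for one well-known Dockerfile breakage pattern and leave the accept/reject decision to a human. The rule below is a hypothetical example, not one of Shipwright's actual learned rules:

```python
# Toy human-in-the-loop Dockerfile check. The single hard-coded rule
# (apt-get install without apt-get update, a classic stale-index failure)
# is a hypothetical stand-in for Shipwright's actual repair knowledge.

def propose_patches(dockerfile):
    """Suggest (line_no, replacement) fixes; a human confirms each one."""
    fixes = []
    for i, line in enumerate(dockerfile.splitlines()):
        if "apt-get install" in line and "apt-get update" not in line:
            fixes.append((i, line.replace(
                "apt-get install", "apt-get update && apt-get install")))
    return fixes

dockerfile = "FROM ubuntu:20.04\nRUN apt-get install -y git"
patches = propose_patches(dockerfile)
```

The proposed patch rewrites line 1 to refresh the package index before installing; a maintainer would review it before it is applied.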
Title: Shipwright: A Human-in-the-Loop System for Dockerfile Repair
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00062
A. Gartziandia
Software embedded in Cyber-Physical Systems (CPSs) usually has a long life-cycle and is continuously evolving. The increasing expansion of IoT and CPSs has highlighted the need for additional mechanisms for the remote deployment and updating of this software, to ensure its correct behaviour. Performance problems require special attention, as they may appear in operation due to limitations in lab testing and to environmental conditions. In this context, we propose a microservice-based method to detect performance problems in CPSs. These microservices will be deployed on the installation to detect performance problems at run time when new software versions are deployed. The problem detection is based on Machine Learning algorithms, which predict the performance of a new software release based on knowledge from previous releases. This permits taking corrective actions so that system reliability is guaranteed.
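As a sketch of the idea only, with hypothetical latency measurements and a plain least-squares fit standing in for the machine learning algorithms (which the abstract does not specify): learn the expected performance trend from earlier releases and flag a new release that deviates from the prediction:

```python
# Toy performance-regression check across releases. The release timings,
# the linear model, and the 10% deviation threshold are all hypothetical.
releases = [(1, 100.0), (2, 103.0), (3, 106.0), (4, 109.0)]  # (version, ms)

def fit_line(points):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

slope, intercept = fit_line(releases)
predicted_v5 = slope * 5 + intercept          # expected latency of release 5
measured_v5 = 140.0                           # hypothetical new measurement
regression = abs(measured_v5 - predicted_v5) > 0.1 * predicted_v5
```

With the trend above, release 5 is expected near 112 ms, so a 140 ms measurement is flagged, and corrective action could then be taken.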
Title: Microservice-Based Performance Problem Detection in Cyber-Physical System Software Updates
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00113
Rajshakhar Paul, Asif Kamal Turzo, Amiangshu Bosu
This paper presents an empirically built and validated dataset of code reviews from the Chromium OS project that either identified or missed security vulnerabilities. The dataset includes a total of 890 vulnerable code changes categorized based on the CWE specification and is publicly available at: https://zenodo.org/record/4539891
Title: A Dataset of Vulnerable Code Changes of the Chromium OS Project
Pub Date : 2021-05-01DOI: 10.1109/ICSE-Companion52605.2021.00107
Max Weber, S. Apel, Norbert Siegmund
These artifacts refer to the study and implementation of the paper 'White-Box Performance-Influence Models: A Profiling and Learning Approach'. In this document, we describe the idea and the process of building white-box performance models for configurable software systems. Specifically, we describe the general steps and tools that we used to implement our approach, the data we obtained, and the evaluation setup. We further list the available artifacts, such as raw measurements, configurations, and scripts, in our Software Heritage repository.
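A performance-influence model expresses a system's performance as a base cost plus per-option influence terms. The toy below, with hypothetical options and timings, recovers additive influences from measured configurations; the paper's white-box approach additionally attributes influences to individual methods via profiling, which this global sketch does not capture:

```python
# Toy performance-influence model for a configurable system with two
# boolean options. Options, timings, and the additive assumption are
# hypothetical; real models also learn interaction terms.

# configurations: (encryption on?, compression on?) -> measured seconds
measurements = {
    (0, 0): 10.0,
    (1, 0): 14.0,   # encryption adds ~4 s
    (0, 1): 12.0,   # compression adds ~2 s
    (1, 1): 16.0,   # influences happen to be purely additive here
}

base = measurements[(0, 0)]
infl_enc = measurements[(1, 0)] - base
infl_comp = measurements[(0, 1)] - base

def predict(enc, comp):
    """perf(config) = base + sum of the enabled options' influences."""
    return base + enc * infl_enc + comp * infl_comp

error = abs(predict(1, 1) - measurements[(1, 1)])   # 0 for additive data
```

When options interact (e.g., compression speeds up encryption's I/O), the residual on the combined configuration would be nonzero, signalling a missing interaction term.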
Title: White-Box Performance-Influence Models: A Profiling and Learning Approach (Replication Package)