Automatically repairing dependency-related build breakage
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330201 | Pages: 106-117
Christian Macho, Shane McIntosh, M. Pinzger
Build systems are widely used in today's software projects to automate integration and build processes. Similar to source code, build specifications need to be maintained to avoid outdated specifications and, as a consequence, build breakage. Recent work indicates that neglected build maintenance is one of the most frequently occurring reasons why open source and proprietary builds break. In this paper, we propose BuildMedic, an approach to automatically repair Maven builds that break due to dependency-related issues. Based on a manual investigation of 37 broken Maven builds in 23 open source Java projects, we derive three strategies to automatically repair the build: Version Update, Delete Dependency, and Add Repository. We evaluate the three strategies on 84 additional broken builds from the 23 studied projects to demonstrate the applicability of our approach. The evaluation shows that BuildMedic can automatically repair 45 of these broken builds (54%). Furthermore, in 36% of the successfully repaired build breakages, BuildMedic outputs at least one repair candidate that is considered a correct repair. Moreover, 76% of the repaired builds could be fixed with only a single dependency correction.
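As a hedged illustration of how one of the strategies, Version Update, might generate repair candidates for a broken dependency (the Dependency record, the hard-coded version list, and all names below are hypothetical and not taken from BuildMedic):

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a "Version Update" repair candidate generator.
// The Dependency type and the candidate versions are illustrative only;
// a real tool would resolve available versions from Maven repositories.
public class VersionUpdateSketch {

    record Dependency(String groupId, String artifactId, String version) {
        String coordinate() {
            return groupId + ":" + artifactId + ":" + version;
        }
    }

    // Produce one repair candidate per alternative version of the broken coordinate.
    static List<Dependency> versionUpdateCandidates(Dependency broken,
                                                    List<String> availableVersions) {
        List<Dependency> candidates = new ArrayList<>();
        for (String v : availableVersions) {
            if (!v.equals(broken.version())) {
                candidates.add(new Dependency(broken.groupId(), broken.artifactId(), v));
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        Dependency broken = new Dependency("org.example", "some-lib", "1.2.0");
        List<String> available = List.of("1.1.0", "1.2.1", "2.0.0"); // hypothetical
        versionUpdateCandidates(broken, available)
                .forEach(c -> System.out.println("candidate: " + c.coordinate()));
    }
}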
{"title":"Automatically repairing dependency-related build breakage","authors":"Christian Macho, Shane McIntosh, M. Pinzger","doi":"10.1109/SANER.2018.8330201","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330201","url":null,"abstract":"Build systems are widely used in today's software projects to automate integration and build processes. Similar to source code, build specifications need to be maintained to avoid outdated specifications, and build breakage as a consequence. Recent work indicates that neglected build maintenance is one of the most frequently occurring reasons why open source and proprietary builds break. In this paper, we propose BuildMedic, an approach to automatically repair Maven builds that break due to dependency-related issues. Based on a manual investigation of 37 broken Maven builds in 23 open source Java projects, we derive three repair strategies to automatically repair the build, namely Version Update, Delete Dependency, and Add Repository. We evaluate the three strategies on 84 additional broken builds from the 23 studied projects in order to demonstrate the applicability of our approach. The evaluation shows that BuildMedic can automatically repair 45 of these broken builds (54%). Furthermore, in 36% of the successfully repaired build breakages, BuildMedic outputs at least one repair candidate that is considered a correct repair. Moreover, 76% of them could be repaired with only a single dependency correction.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"73 1","pages":"106-117"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85769488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatically exploiting implicit design knowledge when solving the class responsibility assignment problem
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330209 | Pages: 197-209
Yongrui Xu, Peng Liang, M. Babar
Assigning responsibilities to classes is vital not only during the initial analysis and design phases of object-oriented analysis and design (OOAD), but also during maintenance and evolution, when new responsibilities have to be assigned to classes or existing responsibilities have to be changed. Class Responsibility Assignment (CRA) is one of the most complex tasks in OOAD because it relies heavily on designers' judgment and implicit design knowledge (DK) about the design problem. Since CRA depends so strongly on implicit DK, (semi-)automated approaches that help designers assign responsibilities to classes should make implicit DK explicit and exploit it effectively. In this paper, we propose a learning-based approach to the CRA problem. A learning mechanism is introduced into a Genetic Algorithm (GA) to extract the implicit DK about which responsibilities have a high probability of being assigned to the same class; the extracted DK is then employed automatically to improve the design quality of the generated solutions. The proposed approach has been evaluated in an experimental study with three cases. Compared to solutions obtained with existing approaches, the proposed approach significantly improves the design quality of the generated solutions to the CRA problem, and its solutions are more likely to be accepted by developers from a practical perspective.
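A minimal sketch of the underlying idea, assuming co-assignment frequencies are learned from a pool of good GA solutions; the data structures and numbers are invented for illustration and do not reproduce the paper's algorithm:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: measure how often two responsibilities end up in the
// same class across good solutions. Such frequencies could then bias the GA
// toward grouping frequently co-assigned responsibilities together.
public class CoAssignmentSketch {

    // A solution maps each responsibility id to a class id.
    static Map<String, Double> coAssignmentRate(List<Map<Integer, Integer>> goodSolutions,
                                                int responsibilityCount) {
        Map<String, Double> rate = new HashMap<>();
        for (int a = 0; a < responsibilityCount; a++) {
            for (int b = a + 1; b < responsibilityCount; b++) {
                int together = 0;
                for (Map<Integer, Integer> solution : goodSolutions) {
                    if (solution.get(a).equals(solution.get(b))) {
                        together++;
                    }
                }
                rate.put(a + "-" + b, (double) together / goodSolutions.size());
            }
        }
        return rate;
    }

    public static void main(String[] args) {
        // Two hypothetical solutions over three responsibilities (0, 1, 2).
        List<Map<Integer, Integer>> solutions = List.of(
                Map.of(0, 0, 1, 0, 2, 1),
                Map.of(0, 0, 1, 0, 2, 0));
        coAssignmentRate(solutions, 3)
                .forEach((pair, r) -> System.out.println(pair + " co-assigned " + r));
    }
}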
{"title":"Automatically exploiting implicit design knowledge when solving the class responsibility assignment problem","authors":"Yongrui Xu, Peng Liang, M. Babar","doi":"10.1109/SANER.2018.8330209","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330209","url":null,"abstract":"Assigning responsibilities to classes is not only vital during initial software analysis/design phases in object-oriented analysis and design (OOAD), but also during maintenance and evolution phases, when new responsibilities have to be assigned to classes or existing responsibilities have to be changed. Class Responsibility Assignment (CRA) is one of the most complex tasks in OOAD as it heavily relies on designers' judgment and implicit design knowledge (DK) of design problems. Since CRA is highly dependent on the successful use of implicit DK, (semi)-automated approaches that help designers to assign responsibilities to classes should make implicit DK explicit and exploit the DK effectively. In this paper, we propose a learning based approach for the Class Responsibility Assignment (CRA) problem. A learning mechanism is introduced into Genetic Algorithm (GA) to extract the implicit DK about which responsibilities have a high probability to be assigned to the same class, and then the extracted DK is employed automatically to improve the design quality of the generated solutions. The proposed approach has been evaluated through an experimental study with three cases. By comparing the solutions obtained from the proposed approach and the existing approaches, the proposed approach can significantly improve the design quality of the generated solutions to the CRA problem, and the generated solutions by the proposed approach are more likely to be accepted by developers from the practical aspects.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"39 1","pages":"197-209"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86548728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ten years of JDeodorant: Lessons learned from the hunt for smells
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330192 | Pages: 4-14
Nikolaos Tsantalis, Theodoros Chaikalis, A. Chatzigeorgiou
Deodorants differ from perfumes because they are applied directly to the body and, by killing bacteria, they reduce odours and offer a refreshing fragrance. That was our goal when we first thought about "bad smells" in code: to develop techniques for effectively identifying and removing (i.e., deodorizing) code smells from object-oriented software. JDeodorant encompasses a number of techniques for suggesting and automatically applying refactoring opportunities on Java source code in a way that requires limited effort on behalf of the developer. In contrast to other approaches that rely on generic strategies adaptable to various smells, JDeodorant adopts ad hoc strategies for each smell, considering the particular characteristics of the underlying design or code problem. In this retrospective paper, we discuss the impact of JDeodorant over the last ten years, as well as a number of tools and techniques developed for a similar purpose that either compare their results with JDeodorant or build on top of it. Finally, we discuss the empirical findings from a number of studies that employed JDeodorant to extract their datasets.
{"title":"Ten years of JDeodorant: Lessons learned from the hunt for smells","authors":"Nikolaos Tsantalis, Theodoros Chaikalis, A. Chatzigeorgiou","doi":"10.1109/SANER.2018.8330192","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330192","url":null,"abstract":"Deodorants are different from perfumes, because they are applied directly on body and by killing bacteria they reduce odours and offer a refreshing fragrance. That was our goal when we first thought about \"bad smells\" in code: to develop techniques for effectively identifying and removing (i.e., deodorizing) code smells from object-oriented software. JDeodorant encompasses a number of techniques for suggesting and automatically applying refactoring opportunities on Java source code, in a way that requires limited effort on behalf of the developer. In contrast to other approaches that rely on generic strategies that can be adapted to various smells, JDeodorant adopts ad-hoc strategies for each smell considering the particular characteristics of the underlying design or code problem. In this retrospective paper, we discuss the impact of JDeodorant over the last ten years and a number of tools and techniques that have been developed for a similar purpose which either compare their results with JDeodorant or have built on top of JDeodorant. Finally, we discuss the empirical findings from a number of studies that employed JDeodorant to extract their datasets.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"1 1","pages":"4-14"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79814015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classifying Stack Overflow posts on API issues
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330213 | Pages: 244-254
Md Ahasanuzzaman, M. Asaduzzaman, C. Roy, Kevin A. Schneider
The design and maintenance of APIs are complex tasks due to the constantly changing requirements of their users. Despite the efforts of their designers, APIs may suffer from a number of issues (such as incomplete or erroneous documentation, poor performance, and backward incompatibility). To maintain a healthy client base, API designers must learn about these issues in order to fix them. Question answering sites such as Stack Overflow (SO) have become a popular place for discussing API issues. Posts about API issues are invaluable to API designers, not only because they help designers learn more about the problem but also because they help them learn the requirements of API users. However, the unstructured nature of posts and the abundance of non-issue posts make detecting SO posts concerning API issues difficult and challenging. In this paper, we first develop a supervised learning approach using a Conditional Random Field (CRF), a statistical modeling method, to identify API issue-related sentences. We combine this information with various post features and user-experience features to build a technique, called CAPS, that classifies SO posts concerning API issues. An evaluation of CAPS using carefully curated SO posts on three popular API types reveals that the technique outperforms all three baseline approaches considered in this study. We also conduct studies to test the generalizability of the CAPS results and to understand the effects of its different sources of information.
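The following sketch only illustrates the general idea of combining sentence-level labels with post-level features; the features, weights, and threshold are invented and are not CAPS's actual model:

import java.util.List;

// Rough sketch: combine the number of sentences a sequence labeller marked as
// issue-related with simple post-level features into a score for
// "this post reports an API issue".
public class ApiIssuePostSketch {

    record Post(List<Boolean> issueSentenceLabels, // output of a CRF-style labeller
                int authorReputation,
                boolean hasStackTrace) {}

    static boolean looksLikeApiIssue(Post post) {
        long issueSentences = post.issueSentenceLabels().stream().filter(b -> b).count();
        double score = 1.5 * issueSentences
                + (post.hasStackTrace() ? 1.0 : 0.0)
                + 0.001 * post.authorReputation();
        return score >= 3.0; // hypothetical decision threshold
    }

    public static void main(String[] args) {
        Post post = new Post(List.of(true, false, true), 1200, true);
        System.out.println("API issue post? " + looksLikeApiIssue(post));
    }
}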
{"title":"Classifying stack overflow posts on API issues","authors":"Md Ahasanuzzaman, M. Asaduzzaman, C. Roy, Kevin A. Schneider","doi":"10.1109/SANER.2018.8330213","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330213","url":null,"abstract":"The design and maintenance of APIs are complex tasks due to the constantly changing requirements of its users. Despite the efforts of its designers, APIs may suffer from a number of issues (such as incomplete or erroneous documentation, poor performance, and backward incompatibility). To maintain a healthy client base, API designers must learn these issues to fix them. Question answering sites, such as Stack Overflow (SO), has become a popular place for discussing API issues. These posts about API issues are invaluable to API designers, not only because they can help to learn more about the problem but also because they can facilitate learning the requirements of API users. However, the unstructured nature of posts and the abundance of non-issue posts make the task of detecting SO posts concerning API issues difficult and challenging. In this paper, we first develop a supervised learning approach using a Conditional Random Field (CRF), a statistical modeling method, to identify API issue-related sentences. We use the above information together with different features of posts and experience of users to build a technique, called CAPS, that can classify SO posts concerning API issues. Evaluation of CAPS using carefully curated SO posts on three popular API types reveals that the technique outperforms all three baseline approaches we consider in this study. We also conduct studies to test the generalizability of CAPS results and to understand the effects of different sources of information on it.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"16 05","pages":"244-254"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72591463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining sandboxes: Are we there yet?
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330231 | Pages: 445-455
Lingfeng Bao, Tien-Duy B. Le, D. Lo
The popularity of the Android platform on mobile devices has attracted much attention from developers and researchers, as well as malware writers. Recently, Jamrozik et al. proposed a technique to secure Android applications, referred to as mining sandboxes. They used automated test case generation to explore the behavior of the app under test and extracted the set of sensitive APIs that were called. Based on the extracted sensitive APIs, they built a sandbox that blocks access to APIs not used during testing. However, they evaluated the proposed technique only with benign apps and did not investigate whether it is effective in detecting the malicious behavior of malware that infects benign apps. Furthermore, they investigated only one test case generation tool (i.e., Droidmate) to build the sandbox, while many others have been proposed in the literature. In this work, we complement Jamrozik et al.'s work in two ways: (1) we evaluate the effectiveness of mining sandboxes in detecting malicious behaviors; (2) we investigate the effectiveness of multiple automated test case generation tools for mining sandboxes. To investigate the effectiveness of mining sandboxes in detecting malicious behaviors, we make use of pairs of malware and the benign apps they infect. We build a sandbox based on the sensitive APIs called by the benign app and check whether it can identify malicious behaviors in the corresponding malware. To generate inputs to apps, we investigate five popular test case generation tools: Monkey, Droidmate, Droidbot, GUIRipper, and PUMA. We conduct two experiments to evaluate the effectiveness and efficiency of these tools in detecting malicious behavior. In the first experiment, we select 10 apps and allow the test case generation tools to run for one hour; in the second, we select 102 pairs of apps and allow the tools to run for one minute. Our experiments highlight that 75.5%-77.2% of the malware in our dataset can be uncovered by mining sandboxes, demonstrating its power to protect Android apps. We also find that Droidbot performs best in generating test cases for mining sandboxes, and that its effectiveness can be further boosted when coupled with other test case generation tools.
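A minimal sketch of the sandbox-mining idea, assuming the sandbox is simply the set of sensitive APIs observed while testing the benign app (the API names are illustrative):

import java.util.Set;

// Minimal sketch of a mined sandbox: the sensitive APIs observed during
// testing of the benign app become an allow-list; any sensitive API the
// (possibly infected) app later calls outside that list is flagged.
public class MinedSandboxSketch {

    private final Set<String> allowedSensitiveApis;

    MinedSandboxSketch(Set<String> observedDuringTesting) {
        this.allowedSensitiveApis = Set.copyOf(observedDuringTesting);
    }

    // Returns true if the call would be blocked (i.e., not seen during testing).
    boolean blocks(String sensitiveApi) {
        return !allowedSensitiveApis.contains(sensitiveApi);
    }

    public static void main(String[] args) {
        MinedSandboxSketch sandbox = new MinedSandboxSketch(
                Set.of("TelephonyManager.getDeviceId",
                       "LocationManager.getLastKnownLocation"));
        // A call the benign app never made during testing is flagged.
        System.out.println(sandbox.blocks("SmsManager.sendTextMessage")); // true
    }
}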
{"title":"Mining sandboxes: Are we there yet?","authors":"Lingfeng Bao, Tien-Duy B. Le, D. Lo","doi":"10.1109/SANER.2018.8330231","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330231","url":null,"abstract":"The popularity of Android platform on mobile devices has attracted much attention from many developers and researchers, as well as malware writers. Recently, Jamrozik et al. proposed a technique to secure Android applications referred to as mining sandboxes. They used an automated test case generation technique to explore the behavior of the app under test and then extracted a set of sensitive APIs that were called. Based on the extracted sensitive APIs, they built a sandbox that can block access to APIs not used during testing. However, they only evaluated the proposed technique with benign apps but not investigated whether it was effective in detecting malicious behavior of malware that infects benign apps. Furthermore, they only investigated one test case generation tool (i.e., Droidmate) to build the sandbox, while many others have been proposed in the literature. In this work, we complement Jamrozik et al.'s work in two ways: (1) we evaluate the effectiveness of mining sandboxes on detecting malicious behaviors; (2) we investigate the effectiveness of multiple automated test case generation tools to mine sandboxes. To investigate effectiveness of mining sandboxes in detecting malicious behaviors, we make use of pairs of malware and benign app it infects. We build a sandbox based on sensitive APIs called by the benign app and check if it can identify malicious behaviors in the corresponding malware. To generate inputs to apps, we investigate five popular test case generation tools: Monkey, Droidmate, Droidbot, GUIRipper, and PUMA. We conduct two experiments to evaluate the effectiveness and efficiency of these test case generation tools on detecting malicious behavior. In the first experiment, we select 10 apps and allow test case generation tools to run for one hour; while in the second experiment, we select 102 pairs of apps and allow the test case generation tools to run for one minute. Our experiments highlight that 75.5%-77.2% of malware in our dataset can be uncovered by mining sandboxes — showing its power to protect Android apps. We also find that Droidbot performs best in generating test cases for mining sandboxes, and its effectiveness can be further boosted when coupled with other test case generation tools.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"20 1","pages":"445-455"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78863790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How do developers fix issues and pay back technical debt in the Apache ecosystem?
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330205 | Pages: 153-163
Georgios Digkas, M. Lungu, P. Avgeriou, A. Chatzigeorgiou, Apostolos Ampatzoglou
During software evolution, technical debt (TD) follows a constant ebb and flow, being incurred and paid back, sometimes in the same day and sometimes ten years later. Several studies in the literature have investigated how technical debt in source code accumulates over time and the consequences of this accumulation for software maintenance. However, to the best of our knowledge, there are no large-scale studies that focus on the types of issues that are fixed and the amount of TD that is paid back during software evolution. In this paper, we present the results of a case study in which we analyzed the evolution of fifty-seven Java open-source software projects of the Apache Software Foundation at the temporal granularity of weekly snapshots. In particular, we focus on the amount of technical debt that is paid back and the types of issues that are fixed. The findings reveal that a small subset of all issue types is responsible for the largest percentage of TD repayment; thus, by targeting particular violations, a development team can achieve higher benefits.
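As a hedged sketch of the kind of aggregation such a study implies, the snippet below sums repaid remediation effort per issue type between two snapshots; the issue types and effort values are invented, and this is not the study's actual tooling:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative aggregation: given the issues that disappeared between two
// consecutive weekly snapshots, sum the repaid remediation effort per type.
public class TdRepaymentSketch {

    record FixedIssue(String type, int remediationMinutes) {}

    static Map<String, Integer> repaidEffortByType(List<FixedIssue> fixedBetweenSnapshots) {
        Map<String, Integer> byType = new HashMap<>();
        for (FixedIssue issue : fixedBetweenSnapshots) {
            byType.merge(issue.type(), issue.remediationMinutes(), Integer::sum);
        }
        return byType;
    }

    public static void main(String[] args) {
        List<FixedIssue> fixed = List.of(              // hypothetical fixed issues
                new FixedIssue("UnusedImport", 5),
                new FixedIssue("HighMethodComplexity", 30),
                new FixedIssue("UnusedImport", 5));
        repaidEffortByType(fixed).forEach((type, minutes) ->
                System.out.println(type + ": " + minutes + " min repaid"));
    }
}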
{"title":"How do developers fix issues and pay back technical debt in the Apache ecosystem?","authors":"Georgios Digkas, M. Lungu, P. Avgeriou, A. Chatzigeorgiou, Apostolos Ampatzoglou","doi":"10.1109/SANER.2018.8330205","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330205","url":null,"abstract":"During software evolution technical debt (TD) follows a constant ebb and flow, being incurred and paid back, sometimes in the same day and sometimes ten years later. There have been several studies in the literature investigating how technical debt in source code accumulates during time and the consequences of this accumulation for software maintenance. However, to the best of our knowledge there are no large scale studies that focus on the types of issues that are fixed and the amount of TD that is paid back during software evolution. In this paper we present the results of a case study, in which we analyzed the evolution of fifty-seven Java open-source software projects by the Apache Software Foundation at the temporal granularity level of weekly snapshots. In particular, we focus on the amount of technical debt that is paid back and the types of issues that are fixed. The findings reveal that a small subset of all issue types is responsible for the largest percentage of TD repayment and thus, targeting particular violations the development team can achieve higher benefits.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"91 1","pages":"153-163"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80926033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bring your own coding style
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330253 | Pages: 527-531
Naoto Ogura, S. Matsumoto, Hideaki Hata, S. Kusumoto
Coding style is a presentational aspect of source code that does not affect the behavior of program execution. The choice of coding style is purely a matter of developer preference. Inconsistent coding style not only decreases readability but can also cause frustration during programming. In this paper, we propose a novel tool, called StyleCoordinator, to solve two problems that would appear to contradict each other: ensuring a consistent coding style for all source code managed in a repository, and allowing developers to use their own coding styles in their local environments. To validate its execution performance, we apply the proposed tool to an actual software repository.
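One plausible way to reconcile the two goals, offered purely as an assumption since the abstract does not describe the tool's internals, is to normalize code to a repository-wide canonical style when it is stored and re-render it in the developer's preferred style locally; the sketch below reduces "style" to brace placement for illustration:

// Sketch of a style bridge between a canonical repository style and a
// developer's local style. This is an assumed mechanism, not StyleCoordinator's
// documented design; only brace placement is handled, purely for illustration.
public class StyleBridgeSketch {

    enum BraceStyle { SAME_LINE, NEXT_LINE }

    // Re-render a header such as "if (ready) {" in the requested brace style.
    static String render(String header, BraceStyle target) {
        String withoutBrace = header.replace("{", "").stripTrailing();
        return target == BraceStyle.SAME_LINE
                ? withoutBrace + " {"
                : withoutBrace + System.lineSeparator() + "{";
    }

    public static void main(String[] args) {
        String local = "if (ready) {";
        String canonical = render(local, BraceStyle.NEXT_LINE);       // stored in the repository
        String backToLocal = render(canonical, BraceStyle.SAME_LINE); // shown to the developer
        System.out.println(canonical);
        System.out.println(backToLocal);
    }
}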
{"title":"Bring your own coding style","authors":"Naoto Ogura, S. Matsumoto, Hideaki Hata, S. Kusumoto","doi":"10.1109/SANER.2018.8330253","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330253","url":null,"abstract":"Coding style is a representation of source code, which does not affect the behavior of program execution. The choice of coding style is purely a matter of developer preference. Inconsistency of coding style not only decreased readability but also can cause frustration during programming. In this paper, we propose a novel tool, called StyleCoordinator, to solve both of the following problems, which would appear to contradict each other: ensuring a consistent coding style for all source codes managed in a repository and ensuring the ability of developers to use their own coding styles in a local environment. In order to validate the execution performance, we apply the proposed tool to an actual software repository.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"27 1","pages":"527-531"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87318158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining Stack Overflow for program repair
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330202 | Pages: 118-129
Xuliang Liu, Hao Zhong
In recent years, automatic program repair has been a hot research topic in the software engineering community, and many approaches have been proposed. Although these approaches produce promising results, some researchers criticize that existing approaches are still limited in their repair capability due to their limited repair templates. Indeed, it is quite difficult to design effective repair templates. An award-winning paper analyzed thousands of manual bug fixes but summarized only ten repair templates. Although more bugs are thus repaired, recent studies show that such repair templates are still insufficient. We notice that programmers often refer to Stack Overflow when they repair bugs. With years of accumulation, Stack Overflow has millions of posts that are potentially useful for repairing many bugs. This observation motivates our work towards mining repair templates from Stack Overflow. In this paper, we propose a novel approach, called SOFix, that extracts code samples from Stack Overflow and mines repair patterns from the extracted code samples. Based on our mined repair patterns, we derived 13 repair templates. We implemented these repair templates in SOFix and conducted evaluations on the widely used benchmark Defects4J. Our results show that SOFix repaired 23 bugs, which is more than existing approaches repair. After comparing the repaired bugs and templates, we find that SOFix repairs more bugs because it has more repair templates. In addition, our results also reveal the urgent need for better fault localization techniques.
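As a toy illustration of applying a repair template (this is not one of SOFix's 13 mined templates, and real repair tools operate on ASTs rather than raw strings):

// Toy repair-template application: wrap a suspicious statement in a null check
// on a given variable. Statement and variable names are invented.
public class RepairTemplateSketch {

    static String wrapInNullCheck(String statement, String variable) {
        return "if (" + variable + " != null) {\n    " + statement + "\n}";
    }

    public static void main(String[] args) {
        String buggy = "result = items.get(0).getName();";
        System.out.println(wrapInNullCheck(buggy, "items"));
    }
}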
{"title":"Mining stackoverflow for program repair","authors":"Xuliang Liu, Hao Zhong","doi":"10.1109/SANER.2018.8330202","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330202","url":null,"abstract":"In recent years, automatic program repair has been a hot research topic in the software engineering community, and many approaches have been proposed. Although these approaches produce promising results, some researchers criticize that existing approaches are still limited in their repair capability, due to their limited repair templates. Indeed, it is quite difficult to design effective repair templates. An award-wining paper analyzes thousands of manual bug fixes, but summarizes only ten repair templates. Although more bugs are thus repaired, recent studies show such repair templates are still insufficient. We notice that programmers often refer to Stack Overflow, when they repair bugs. With years of accumulation, Stack Overflow has millions of posts that are potentially useful to repair many bugs. The observation motives our work towards mining repair templates from Stack Overflow. In this paper, we propose a novel approach, called SOFix, that extracts code samples from Stack Overflow, and mines repair patterns from extracted code samples. Based on our mined repair patterns, we derived 13 repair templates. We implemented these repair templates in SOFix, and conducted evaluations on the widely used benchmark, Defects4J. Our results show that SOFix repaired 23 bugs, which are more than existing approaches. After comparing repaired bugs and templates, we find that SOFix repaired more bugs, since it has more repair templates. In addition, our results also reveal the urgent need for better fault localization techniques.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"40 1","pages":"118-129"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88175715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RETICULA: Real-time code quality assessment
Pub Date: 2018-03-01 | DOI: 10.1109/SANER.2018.8330256 | Pages: 542-546
L. Frunzio, B. Lin, Michele Lanza, G. Bavota
Code metrics can be used to assess the internal quality of software systems, and in particular their adherence to good design principles. While providing hints about code quality, metrics are difficult to interpret. Indeed, they take a code component as input and assess a quality attribute (e.g., code readability) by providing a number as output. However, it might be unclear for developers whether that value should be considered good or bad for the specific code at hand. We present RETICULA (REal TIme Code qUaLity Assessment), a plugin for the IntelliJ IDE to assist developers in perceiving code quality during software development. RETICULA compares the quality metrics for a project (or a single class) under development in the IDE with those of similar open source systems (classes) previously analyzed. With the visualized results, developers can gain insights about the quality of their code. A video illustrating the features of RETICULA can be found at: https://reticulaplugin.github.io/.
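A minimal sketch of the comparison idea, assuming the plugin places a metric value for the class under development into the distribution of the same metric over previously analyzed open source classes; the metric, corpus values, and percentile report below are invented:

import java.util.List;

// Sketch: report where a metric value for the class under development falls
// within a reference distribution of the same metric.
public class MetricPercentileSketch {

    static double percentile(double value, List<Double> referenceValues) {
        long below = referenceValues.stream().filter(v -> v <= value).count();
        return 100.0 * below / referenceValues.size();
    }

    public static void main(String[] args) {
        List<Double> corpusComplexity = List.of(3.0, 5.0, 8.0, 12.0, 20.0); // hypothetical corpus
        double myClassComplexity = 11.0;
        System.out.printf("Your class is more complex than %.0f%% of similar classes%n",
                percentile(myClassComplexity, corpusComplexity));
    }
}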
{"title":"RETICULA: Real-time code quality assessment","authors":"L. Frunzio, B. Lin, Michele Lanza, G. Bavota","doi":"10.1109/SANER.2018.8330256","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330256","url":null,"abstract":"Code metrics can be used to assess the internal quality of software systems, and in particular their adherence to good design principles. While providing hints about code quality, metrics are difficult to interpret. Indeed, they take a code component as input and assess a quality attribute (e.g., code readability) by providing a number as output. However, it might be unclear for developers whether that value should be considered good or bad for the specific code at hand. We present RETICULA (REal TIme Code qUaLity Assessment), a plugin for the IntelliJ IDE to assist developers in perceiving code quality during software development. RETICULA compares the quality metrics for a project (or a single class) under development in the IDE with those of similar open source systems (classes) previously analyzed. With the visualized results, developers can gain insights about the quality of their code. A video illustrating the features of RETICULA can be found at: https://reticulaplugin.github.io/.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"11 1","pages":"542-546"},"PeriodicalIF":0.0,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78033118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting exploratory code search with differencing and visualization
Wenjian Liu, Xin Peng, Zhenchang Xing, Junyi Li, Bing Xie, Wenyun Zhao
Pub Date: 2018-03-01 | DOI: 10.1109/SANER.2018.8330218 | Pages: 300-310
Searching and reusing online code has become a common practice in software development. Two important characteristics of online code have not been carefully considered in current tool support. First, many pieces of online code are largely similar but subtly different. Second, several pieces of code may form complex relations through their differences. These two characteristics make it difficult to properly rank online code for a search query and reduce the efficiency of examining search results. In this paper, we present an exploratory online code search approach that explicitly takes these two characteristics into account. Given a list of methods returned for a search query, our approach uses clone detection and code differencing techniques to analyze both the commonalities and the differences among the methods in the search results. It then produces an exploration graph that visualizes the method differences and the relationships between methods through their differences. The exploration graph allows developers to explore search results in a structured view of the different method groups present in the results, and it turns implicit code differences into visual cues that help developers navigate the results. We implement our approach in a web-based tool called CodeNuance. We conduct experiments to evaluate the effectiveness of CodeNuance for search results examination, compared with ranked-list and code-clustering-based examination. We also compare the performance and user behavior differences between our tool and other exploratory code search tools.
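A simplified sketch of building such an exploration graph, assuming methods are compared by token-set similarity; the similarity measure, threshold, and token sets are invented and do not reflect CodeNuance's actual clone detection and differencing pipeline:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: connect two result methods when their token sets are largely similar,
// so a developer can navigate from one variant to its close neighbours.
public class ExplorationGraphSketch {

    record Edge(int fromMethod, int toMethod, double similarity) {}

    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    static List<Edge> buildGraph(List<Set<String>> methodTokenSets, double threshold) {
        List<Edge> edges = new ArrayList<>();
        for (int i = 0; i < methodTokenSets.size(); i++) {
            for (int j = i + 1; j < methodTokenSets.size(); j++) {
                double sim = jaccard(methodTokenSets.get(i), methodTokenSets.get(j));
                if (sim >= threshold) {
                    edges.add(new Edge(i, j, sim));
                }
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        List<Set<String>> methods = List.of(              // hypothetical token sets
                Set.of("read", "file", "buffer", "close"),
                Set.of("read", "file", "stream", "close"),
                Set.of("parse", "json", "node"));
        buildGraph(methods, 0.5).forEach(e ->
                System.out.println("m" + e.fromMethod() + " <-> m" + e.toMethod()
                        + " (similarity " + e.similarity() + ")"));
    }
}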
{"title":"Supporting exploratory code search with differencing and visualization","authors":"Wenjian Liu, Xin Peng, Zhenchang Xing, Junyi Li, Bing Xie, Wenyun Zhao","doi":"10.1109/SANER.2018.8330218","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330218","url":null,"abstract":"Searching and reusing online code has become a common practice in software development. Two important characteristics of online code have not been carefully considered in current tool support. First, many pieces of online code are largely similar but subtly different. Second, several pieces of code may form complex relations through their differences. These two characteristics make it difficult to properly rank online code to a search query and reduce the efficiency of examining search results. In this paper, we present an exploratory online code search approach that explicitly takes into account the above two characteristics of online code. Given a list of methods returned for a search query, our approach uses clone detection and code differencing techniques to analyze both commonalities and differences among the methods in the search results. It then produces an exploration graph that visualizes the method differences and the relationships of methods through their differences. The exploration graph allows developers to explore search results in a structured view of different method groups present in the search results, and turns implicit code differences into visual cues to help developers navigate the search results. We implement our approach in a web-based tool called CodeNuance. We conduct experiments to evaluate the effectiveness of our CodeNuance tool for search results examination, compared with ranked-list and code-clustering based search results examination. We also compare the performance and user behavior differences in using our tool and other exploratory code search tools.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"413 1","pages":"300-310"},"PeriodicalIF":0.0,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78141479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}