Pub Date: 2018-04-23 | DOI: 10.1109/SANER.2018.8330198
Giovanni Grano, Adelina Ciurumelea, Sebastiano Panichella, Fabio Palomba, H. Gall
The intense competition characterizing mobile application marketplaces forces developers to create and maintain high-quality mobile apps in order to ensure their commercial success and acquire new users. This has motivated the research community to propose solutions that automate the testing process of mobile apps. However, the main problem of current testing tools is that they generate redundant and random inputs that are insufficient to properly simulate human behavior, thus leaving feature and crash bugs undetected until they are encountered by users. To cope with this problem, we conjecture that information available in user reviews (which previous work showed to be effective for maintenance and evolution problems) can be successfully exploited to identify the main issues users experience while using mobile applications, e.g., GUI problems and crashes. In this paper we provide initial insights in this direction, investigating (i) what types of user feedback can actually be exploited for testing purposes, (ii) how complementary user feedback and automated testing tools are when detecting crash bugs or errors, and (iii) whether an automated system able to monitor crash-related information reported in user feedback is sufficiently accurate. Results of our study, involving 11,296 reviews of 8 mobile applications, show that user feedback can be exploited to provide contextual details about errors or exceptions detected by automated testing tools. Moreover, user reviews also help detect bugs that would remain uncovered when relying on testing tools only. Finally, the accuracy of the proposed automated monitoring system demonstrates the feasibility of our vision, i.e., integrating user feedback into the testing process.
{"title":"Exploring the integration of user feedback in automated testing of Android applications","authors":"Giovanni Grano, Adelina Ciurumelea, Sebastiano Panichella, Fabio Palomba, H. Gall","doi":"10.1109/SANER.2018.8330198","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330198","url":null,"abstract":"The intense competition characterizing mobile application's marketplaces forces developers to create and maintain high-quality mobile apps in order to ensure their commercial success and acquire new users. This motivated the research community to propose solutions that automate the testing process of mobile apps. However, the main problem of current testing tools is that they generate redundant and random inputs that are insufficient to properly simulate the human behavior, thus leaving feature and crash bugs undetected until they are encountered by users. To cope with this problem, we conjecture that information available in user reviews—that previous work showed as effective for maintenance and evolution problems—can be successfully exploited to identify the main issues users experience while using mobile applications, e.g., GUI problems and crashes. In this paper we provide initial insights into this direction, investigating (i) what type of user feedback can be actually exploited for testing purposes, (ii) how complementary user feedback and automated testing tools are, when detecting crash bugs or errors and (iii) whether an automated system able to monitor crash-related information reported in user feedback is sufficiently accurate. Results of our study, involving 11,296 reviews of 8 mobile applications, show that user feedback can be exploited to provide contextual details about errors or exceptions detected by automated testing tools. Moreover, they also help detecting bugs that would remain uncovered when rely on testing tools only. Finally, the accuracy of the proposed automated monitoring system demonstrates the feasibility of our vision, i.e., integrate user feedback into testing process.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"48 1","pages":"72-83"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84739215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330228
Taiza Montenegro, Hugo Melo, Roberta Coelho, E. Barbosa
The exception handling policy of a system comprises the set of design rules that specify its exception handling behavior (how exceptions should be handled and thrown in a system). Such a policy is usually undocumented and implicitly defined by the system architect. Developers are usually unaware of these rules and may think that just sprinkling the code with catch-blocks adequately deals with the exceptional conditions of a system. As a consequence, the exception handling code, originally designed to make the program more reliable, may become a source of faults (e.g., uncaught exceptions are one of the main causes of crashes in current Java applications). To mitigate this problem, we propose Exception Policy Expert (EPE), a tool embedded in the Eclipse IDE that warns developers about policy violations related to the code being edited. A case study performed in a real development context showed that the tool could indeed make the exception handling policy explicit to developers during development.
{"title":"Improving developers awareness of the exception handling policy","authors":"Taiza Montenegro, Hugo Melo, Roberta Coelho, E. Barbosa","doi":"10.1109/SANER.2018.8330228","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330228","url":null,"abstract":"The exception handling policy of a system comprises the set of design rules that specify its exception handling behavior (how exceptions should be handled and thrown in a system). Such policy is usually undocumented and implicitly defined by the system architect. Developers are usually unaware of such rules and may think that by just sprinkling the code with catch-blocks they can adequately deal with the exceptional conditions of a system. As a consequence, the exception handling code once designed to make the program more reliable may become a source of faults (e.g., the uncaught exceptions are one of the main causes of crashes in current Java applications). To mitigate such problem, we propose Exception Policy Expert (EPE), a tool embedded in Eclipse IDE that warns developers about policy violations related to the code being edited. A case study performed in a real development context showed that the tool could indeed make the exception handling policy explicit to the developers during development.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"14 5 Suppl 3 1","pages":"413-422"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84985061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330217
R. Kula, Coen De Roover, D. Germán, T. Ishio, Katsuro Inoue
The popularity of super repositories such as Maven Central and CRAN is a testament to software reuse activities in open-source and commercial projects alike. However, several studies have highlighted the risks and dangers that arise when application developers keep dependencies on outdated library versions. Intelligent mining of super repositories could reveal hidden trends within the corresponding software ecosystem and thereby provide valuable insights for such dependency-related decisions. In this paper, we propose the Software Universe Graph (SUG) model as a structured abstraction of the evolution of software systems and their library dependencies over time. To demonstrate the SUG's usefulness, we conduct an empirical study using 6,374 Maven artifacts and 6,509 CRAN packages mined from their real-world ecosystems. Visualizations of the SUG model such as 'library coexistence pairings' and 'dependents diffusion' uncover popularity, adoption, and diffusion patterns within each software ecosystem. Results show that the Maven ecosystem takes a more conservative approach to dependency updating than the CRAN ecosystem.
{"title":"A generalized model for visualizing library popularity, adoption, and diffusion within a software ecosystem","authors":"R. Kula, Coen De Roover, D. Germán, T. Ishio, Katsuro Inoue","doi":"10.1109/SANER.2018.8330217","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330217","url":null,"abstract":"The popularity of super repositories such as Maven Central and the CRAN is a testament to software reuse activities in both open-source and commercial projects alike. However, several studies have highlighted the risks and dangers brought about by application developers keeping dependencies on outdated library versions. Intelligent mining of super repositories could reveal hidden trends within the corresponding software ecosystem and thereby provide valuable insights for such dependency-related decisions. In this paper, we propose the Software Universe Graph (SUG) Model as a structured abstraction of the evolution of software systems and their library dependencies over time. To demonstrate the SUG's usefulness, we conduct an empirical study using 6,374 Maven artifacts and over 6,509 CRAN packages mined from their real-world ecosystems. Visualizations of the SUG model such as ‘library coexistence pairings’ and ‘dependents diffusion’ uncover popularity, adoption and diffusion patterns within each software ecosystem. Results show the Maven ecosystem as having a more conservative approach to dependency updating than the CRAN ecosystem.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"22 1","pages":"288-299"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80996551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330195
Carmine Vassallo, Sebastiano Panichella, Fabio Palomba, Sebastian Proksch, A. Zaidman, H. Gall
Automatic static analysis tools (ASATs) support automatic code quality evaluation of software systems with the aim of (i) avoiding and/or removing bugs and (ii) spotting design issues. Their widespread acceptance is hindered by (i) high false positive rates and (ii) low comprehensibility of the generated warnings. Researchers and ASAT vendors have proposed solutions to prioritize such warnings with the aim of guiding developers toward the most severe ones. However, none of the proposed solutions considers the development context in which an ASAT is being used to further improve the selection of relevant warnings. To shed light on the impact of such contexts on warning configuration, usage, and prioritization strategies, we surveyed 42 developers (69% in industry and 31% in open source projects) and interviewed 11 industrial experts who integrate ASATs in their workflow. While we can confirm previous findings on the reluctance of developers to configure ASATs, our study highlights that (i) 71% of developers do pay attention to different warning categories depending on the development context, and (ii) 63% of our respondents rely on specific factors (e.g., team policies and composition) when prioritizing warnings to fix during their programming. Our results clearly indicate ways to better assist developers by improving existing warning selection and prioritization strategies.
{"title":"Context is king: The developer perspective on the usage of static analysis tools","authors":"Carmine Vassallo, Sebastiano Panichella, Fabio Palomba, Sebastian Proksch, A. Zaidman, H. Gall","doi":"10.1109/SANER.2018.8330195","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330195","url":null,"abstract":"Automatic static analysis tools (ASATs) are tools that support automatic code quality evaluation of software systems with the aim of (i) avoiding and/or removing bugs and (ii) spotting design issues. Hindering their wide-spread acceptance are their (i) high false positive rates and (ii) low comprehensibility of the generated warnings. Researchers and ASATs vendors have proposed solutions to prioritize such warnings with the aim of guiding developers toward the most severe ones. However, none of the proposed solutions considers the development context in which an ASAT is being used to further improve the selection of relevant warnings. To shed light on the impact of such contexts on the warnings configuration, usage and adopted prioritization strategies, we surveyed 42 developers (69% in industry and 31% in open source projects) and interviewed 11 industrial experts that integrate ASATs in their workflow. While we can confirm previous findings on the reluctance of developers to configure ASATs, our study highlights that (i) 71% of developers do pay attention to different warning categories depending on the development context, and (ii) 63% of our respondents rely on specific factors (e.g., team policies and composition) when prioritizing warnings to fix during their programming. Our results clearly indicate ways to better assist developers by improving existing warning selection and prioritization strategies.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"5 1","pages":"38-49"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82538541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330252
L. Pelloni, Giovanni Grano, Adelina Ciurumelea, Sebastiano Panichella, Fabio Palomba, H. Gall
Mobile devices such as smartphones, tablets, and wearables are changing the way we do things, radically modifying our approach to technology. To sustain the high competition characterizing the mobile market, developers need to deliver high-quality applications in a short release cycle. To reveal and fix bugs as soon as possible, researchers and practitioners have proposed tools to automate the testing process. However, such tools generate a high number of redundant inputs, lack contextual information, and generate reports that are difficult to analyze. In this context, the content of user reviews represents an unmatched source for developers seeking defects in their applications. However, no prior work has explored the adoption of information available in user reviews for testing purposes. In this demo we present BECLOMA, a tool that enables the integration of user feedback into the testing process of mobile apps. BECLOMA links information from testing tools and user reviews, presenting to developers an augmented testing report that combines stack traces with user review information referring to the same crash. We show that BECLOMA not only facilitates the diagnosis and fixing of app bugs, but also presents additional benefits: it eases the usage of testing tools and automates the analysis of user reviews from the Google Play Store.
{"title":"BECLoMA: Augmenting stack traces with user review information","authors":"L. Pelloni, Giovanni Grano, Adelina Ciurumelea, Sebastiano Panichella, Fabio Palomba, H. Gall","doi":"10.1109/SANER.2018.8330252","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330252","url":null,"abstract":"Mobile devices such as smartphones, tablets and wearables are changing the way we do things, radically modifying our approach to technology. To sustain the high competition characterizing the mobile market, developers need to deliver high quality applications in a short release cycle. To reveal and fix bugs as soon as possible, researchers and practitioners proposed tools to automate the testing process. However, such tools generate a high number of redundant inputs, lacking of contextual information and generating reports difficult to analyze. In this context, the content of user reviews represents an unmatched source for developers seeking for defects in their applications. However, no prior work explored the adoption of information available in user reviews for testing purposes. In this demo we present BECLOMA, a tool to enable the integration of user feedback in the testing process of mobile apps. BECLOMA links information from testing tools and user reviews, presenting to developers an augmented testing report combining stack traces with user reviews information referring to the same crash. We show that BECLOMA facilitates not only the diagnosis and fix of app bugs, but also presents additional benefits: it eases the usage of testing tools and automates the analysis of user reviews from the Google Play Store.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"115 12 1","pages":"522-526"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90232380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330255
Katsuhisa Maruyama, Shinpei Hayashi, Takayuki Omori
Recording code changes has come to be well recognized as an effective means for understanding the evolution of existing programs and making their future changes efficient. Although fine-grained textual changes of source code are worth leveraging in various situations, there is no satisfactory tool that records such changes. This paper proposes yet another tool, called ChangeMacroRecorder, which automatically records all textual changes of source code while a programmer writes and modifies it in Eclipse's Java editor. Its capability has been improved with respect to both the accuracy of its recording and the convenience of its use. Tool developers can easily and cheaply create new applications that utilize recorded changes by embedding our proposed recording tool into them.
{"title":"ChangeMacroRecorder: Recording fine-grained textual changes of source code","authors":"Katsuhisa Maruyama, Shinpei Hayashi, Takayuki Omori","doi":"10.1109/SANER.2018.8330255","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330255","url":null,"abstract":"Recording code changes comes to be well recognized as an effective means for understanding the evolution of existing programs and making their future changes efficient. Although fine-grained textual changes of source code are worth leveraging in various situations, there is no satisfactory tool that records such changes. This paper proposes a yet another tool, called ChangeMacroRecorder, which automatically records all textual changes of source code while a programmer writes and modifies it on the Eclipse's Java editor. Its capability has been improved with respect to both the accuracy of its recording and the convenience for its use. Tool developers can easily and cheaply create their new applications that utilize recorded changes by embedding our proposed recording tool into them.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"32 1","pages":"537-541"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91291844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330200
Carmen Coviello, Simone Romano, G. Scanniello, A. Marchetto, G. Antoniol, A. Corazza
Regression testing is an important activity that can be expensive (e.g., for large test suites). Test suite reduction approaches speed up regression testing by removing redundant test cases. These approaches can be classified as adequate or inadequate. Adequate approaches reduce test suites so that they completely preserve the test requirements (e.g., code coverage) of the original test suites. Inadequate approaches produce reduced test suites that only partially preserve the test requirements. An inadequate approach is appealing when it leads to a greater reduction in test suite size at the expense of a small loss in fault-detection capability. We investigate a clustering-based approach for inadequate test suite reduction and compare it with well-known adequate approaches. Our investigation is founded on a public dataset and allows an exploration of the trade-offs in test suite reduction. The results, together with the guidelines defined in this research, support more informed decisions on balancing the size, coverage, and fault-detection loss of reduced test suites when using clustering.
{"title":"Clustering support for inadequate test suite reduction","authors":"Carmen Coviello, Simone Romano, G. Scanniello, A. Marchetto, G. Antoniol, A. Corazza","doi":"10.1109/SANER.2018.8330200","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330200","url":null,"abstract":"Regression testing is an important activity that can be expensive (e.g., for large test suites). Test suite reduction approaches speed up regression testing by removing redundant test cases. These approaches can be classified as adequate or inadequate. Adequate approaches reduce test suites so that they completely preserve the test requirements (e.g., code coverage) of the original test suites. Inadequate approaches produce reduced test suites that only partially preserve the test requirements. An inadequate approach is appealing when it leads to a greater reduction in test suite size at the expense of a small loss in fault-detection capability. We investigate a clustering-based approach for inadequate test suite reduction and compare it with well-known adequate approaches. Our investigation is founded on a public dataset and allows an exploration of trade-offs in test suite reduction. Results help a more informed decision, using guidelines defined in this research, to balance size, coverage, and fault-detection loss of reduced test suites when using clustering.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"28 1","pages":"95-105"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88433222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330210
Zhou Xu, Jin Liu, Xiapu Luo, Zhang Tao
As defects in software modules may cause product failures and financial losses, it is critical to utilize defect prediction methods to effectively identify potentially defective modules for thorough inspection, especially in the early stages of the software development lifecycle. For an upcoming version of a software project, it is practical to employ the historical labeled defect data of prior versions within the same project to conduct defect prediction on the current version, i.e., Cross-Version Defect Prediction (CVDP). However, software development is a dynamic evolution process that may cause the data distribution (such as defect characteristics) to vary across versions. Furthermore, the raw features usually do not reveal well the intrinsic structure information behind the data. Therefore, it is challenging to perform effective CVDP. In this paper, we propose a two-phase CVDP framework that combines Hybrid Active Learning and Kernel PCA (HALKP) to address these two issues. In the first stage, HALKP uses a hybrid active learning method to select informative and representative unlabeled modules from the current version for querying their labels, then merges them into the labeled modules of the prior version to form an enhanced training set. In the second stage, HALKP employs a non-linear mapping method, kernel PCA, to extract representative features by embedding the original data of the two versions into a high-dimensional space. We evaluate the HALKP framework on 31 versions of 10 projects with three prevalent performance indicators. The experimental results indicate that HALKP achieves encouraging results, with an average F-measure, g-mean, and Balance of 0.480, 0.592, and 0.580, respectively, and significantly outperforms nearly all baseline methods.
{"title":"Cross-version defect prediction via hybrid active learning with kernel principal component analysis","authors":"Zhou Xu, Jin Liu, Xiapu Luo, Zhang Tao","doi":"10.1109/SANER.2018.8330210","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330210","url":null,"abstract":"As defects in software modules may cause product failure and financial loss, it is critical to utilize defect prediction methods to effectively identify the potentially defective modules for a thorough inspection, especially in the early stage of software development lifecycle. For an upcoming version of a software project, it is practical to employ the historical labeled defect data of the prior versions within the same project to conduct defect prediction on the current version, i.e., Cross-Version Defect Prediction (CVDP). However, software development is a dynamic evolution process that may cause the data distribution (such as defect characteristics) to vary across versions. Furthermore, the raw features usually may not well reveal the intrinsic structure information behind the data. Therefore, it is challenging to perform effective CVDP. In this paper, we propose a two-phase CVDP framework that combines Hybrid Active Learning and Kernel PCA (HALKP) to address these two issues. In the first stage, HALKP uses a hybrid active learning method to select some informative and representative unlabeled modules from the current version for querying their labels, then merges them into the labeled modules of the prior version to form an enhanced training set. In the second stage, HALKP employs a non-linear mapping method, kernel PCA, to extract representative features by embedding the original data of two versions into a high-dimension space. We evaluate the HALKP framework on 31 versions of 10 projects with three prevalent performance indicators. The experimental results indicate that HALKP achieves encouraging results with average F-measure, g-mean and Balance of 0.480, 0.592 and 0.580, respectively and significantly outperforms nearly all baseline methods.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"41 1","pages":"209-220"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90517518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330227
Aikaterini Paltoglou, Vassilis Zafeiris, E. Giakoumakis, N. Diamantidis
JavaScript (JS) is a dynamic, weakly-typed and object-based programming language that expanded its reach, in recent years, from the desktop web browser to a wide range of runtime platforms in embedded, mobile and server hosts. Moreover, the scope of functionality implemented in JS scaled from DOM manipulation in dynamic HTML pages to full-scale applications for various domains, stressing the need for code reusability and maintainability. Towards this direction, the ECMAScript 6 (ES6) revision of the language standardized the syntax for class and module definitions, streamlining the encapsulation of data and functionality at various levels of granularity. This work focuses on refactoring client-side web applications for the elimination of code smells relevant to global variables and functions that are declared in JS files linked to a web page. These declarations "pollute" the global namespace at runtime and often lead to name conflicts with undesired effects. We propose a method for the encapsulation of global declarations through automated refactoring to ES6 modules. Our approach transforms each linked JS script of a web application into an ES6 module with appropriate import and export declarations that are inferred through static analysis. A prototype implementation of the proposed method, based on WALA libraries, has been evaluated on a set of open source projects. The evaluation results support the applicability and runtime efficiency of the proposed method.
{"title":"Automated refactoring of client-side JavaScript code to ES6 modules","authors":"Aikaterini Paltoglou, Vassilis Zafeiris, E. Giakoumakis, N. Diamantidis","doi":"10.1109/SANER.2018.8330227","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330227","url":null,"abstract":"JavaScript (JS) is a dynamic, weakly-typed and object-based programming language that expanded its reach, in recent years, from the desktop web browser to a wide range of runtime platforms in embedded, mobile and server hosts. Moreover, the scope of functionality implemented in JS scaled from DOM manipulation in dynamic HTML pages to full-scale applications for various domains, stressing the need for code reusability and maintainability. Towards this direction, the ECMAScript 6 (ES6) revision of the language standardized the syntax for class and module definitions, streamlining the encapsulation of data and functionality at various levels of granularity. This work focuses on refactoring client-side web applications for the elimination of code smells, relevant to global variables and functions that are declared in JS files linked to a web page. These declarations \"pollute\" the global namespace at runtime and often lead to name conflicts with undesired effects. We propose a method for the encapsulation of global declarations through automated refactoring to ES6 modules. Our approach transforms each linked JS script of a web application to an ES6 module with appropriate import and export declarations that are inferred through static analysis. A prototype implementation of the proposed method, based on WALA libraries, has been evaluated on a set of open source projects. The evaluation results support the applicability and runtime efficiency of the proposed method.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"90 1","pages":"402-412"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89960059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-20 | DOI: 10.1109/SANER.2018.8330206
Eduard van der Bent, Jurriaan Hage, Joost Visser, Georgios Gousios
Puppet is a declarative language for configuration management that has rapidly gained popularity in recent years. Numerous organizations now rely on Puppet code for deploying their software systems onto cloud infrastructures. In this paper we provide a definition of code quality for Puppet code and an automated technique for measuring and rating Puppet code quality. To this end, we first explore the notion of code quality as it applies to Puppet code by performing a survey among Puppet developers. Second, we develop a measurement model for the maintainability aspect of Puppet code quality. To arrive at this measurement model, we derive appropriate quality metrics from our survey results and from existing software quality models. We implemented the Puppet code quality model in a software analysis tool. We validate our definition of Puppet code quality and the measurement model through a structured interview with Puppet experts and by comparing the tool results with the quality judgments of those experts. The validation shows that the measurement model and tool provide quality judgments of Puppet code that closely match the judgments of experts. Also, the experts deem the model appropriate and usable in practice. The Software Improvement Group (SIG) has started using the model in its consultancy practice.
{"title":"How good is your puppet? An empirically defined and validated quality model for puppet","authors":"Eduard van der Bent, Jurriaan Hage, Joost Visser, Georgios Gousios","doi":"10.1109/SANER.2018.8330206","DOIUrl":"https://doi.org/10.1109/SANER.2018.8330206","url":null,"abstract":"Puppet is a declarative language for configuration management that has rapidly gained popularity in recent years. Numerous organizations now rely on Puppet code for deploying their software systems onto cloud infrastructures. In this paper we provide a definition of code quality for Puppet code and an automated technique for measuring and rating Puppet code quality. To this end, we first explore the notion of code quality as it applies to Puppet code by performing a survey among Puppet developers. Second, we develop a measurement model for the maintainability aspect of Puppet code quality. To arrive at this measurement model, we derive appropriate quality metrics from our survey results and from existing software quality models. We implemented the Puppet code quality model in a software analysis tool. We validate our definition of Puppet code quality and the measurement model by a structured interview with Puppet experts and by comparing the tool results with quality judgments of those experts. The validation shows that the measurement model and tool provide quality judgments of Puppet code that closely match the judgments of experts. Also, the experts deem the model appropriate and usable in practice. The Software Improvement Group (SIG) has started using the model in its consultancy practice.","PeriodicalId":6602,"journal":{"name":"2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"31 1","pages":"164-174"},"PeriodicalIF":0.0,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91011434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}