
Latest publications: 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER)

Towards a framework for analysis, transformation, and manipulation of Makefiles
Douglas H. Martin
Build systems are an integral part of the software development process, being responsible for turning source code into a deliverable product. They are, however, difficult to comprehend and maintain at times. Make, the most popular build language, is often cited as being difficult to debug. In this work, we propose a framework for analyzing and manipulating Makefiles, and use existing software analysis techniques, such as source transformation and clone detection, to discover how the language is used in open source systems.
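As a rough illustration of the kind of analysis such a framework could support (this is not the paper's design, and the parsing logic and function names below are invented), the following sketch extracts Makefile rules and flags targets whose recipes are textually identical, i.e., exact recipe clones:

```python
import re
from collections import defaultdict

def parse_rules(makefile_text):
    """Extract (target, recipe) pairs from a Makefile.

    Deliberately simplified: a rule starts with 'target: deps' and its
    recipe is the following tab-indented lines. Variables, includes,
    and conditionals are ignored.
    """
    rules = []
    target, recipe = None, []
    for line in makefile_text.splitlines():
        if line.startswith("\t") and target is not None:
            recipe.append(line.strip())
        else:
            if target is not None and recipe:
                rules.append((target, tuple(recipe)))
            # Match 'target:' but not variable assignments like 'CC := gcc'.
            m = re.match(r"^([^\s:#][^:=]*):(?!=)", line)
            target, recipe = (m.group(1).strip(), []) if m else (None, [])
    if target is not None and recipe:
        rules.append((target, tuple(recipe)))
    return rules

def exact_recipe_clones(makefile_text):
    """Group targets whose recipes are identical after whitespace trimming."""
    groups = defaultdict(list)
    for target, recipe in parse_rules(makefile_text):
        groups[recipe].append(target)
    return [targets for targets in groups.values() if len(targets) > 1]

example = """
clean1:
\trm -f *.o

clean2:
\trm -f *.o
"""
print(exact_recipe_clones(example))  # [['clean1', 'clean2']]
```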
DOI: 10.1109/SANER.2015.7081890 (published 2015-03-02)
Citations: 0
An empirical study of work fragmentation in software evolution tasks
Heider Sanchez, R. Robbes, Víctor M. González
Information workers and software developers are exposed to work fragmentation, an interleaving of activities and interruptions during their normal work day. Small-scale observational studies have shown that this can be detrimental to their work. In this paper, we perform a large-scale study of this phenomenon for the particular case of software developers performing software evolution tasks. Our study is based on several thousand interaction traces collected by Mylyn for dozens of developers. We observe that work fragmentation is correlated with lower observed productivity at both the macro level (for entire sessions) and the micro level (around markers of work fragmentation); further, longer activity switches seem to strengthen the effect. These observations are the basis for subsequent studies investigating the phenomenon of work fragmentation.
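The paper's measurements are based on Mylyn interaction traces; as a hypothetical illustration of one fragmentation measure, the sketch below counts task switches and the average length of uninterrupted activity stretches in a time-stamped event stream (the Event type and the metric itself are assumptions, not Mylyn's or the authors' definitions):

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float  # seconds since session start
    task: str         # task the developer touched

def fragmentation(events):
    """Return (number of task switches, mean uninterrupted stretch length).

    A 'switch' is a pair of consecutive events on different tasks; a
    'stretch' is a maximal run of events on the same task. A toy proxy
    for fragmentation, not the paper's definition.
    """
    if not events:
        return 0, 0.0
    switches = sum(1 for a, b in zip(events, events[1:]) if a.task != b.task)
    stretches = switches + 1
    return switches, len(events) / stretches

trace = [Event(0, "bug-42"), Event(30, "bug-42"), Event(65, "email"),
         Event(90, "bug-42"), Event(120, "bug-42")]
print(fragmentation(trace))  # (2, 1.666...)
```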
DOI: 10.1109/SANER.2015.7081835 (published 2015-03-02)
Citations: 26
Towards incremental model slicing for delta-oriented software product lines
Sascha Lity, H. Baller, Ina Schaefer
Analyzing today's software systems to support, e.g., testing, verification, or debugging is becoming more challenging due to their increasing complexity. Model slicing is a promising analysis technique that tackles this issue by abstracting away those parts that do not influence the current point of interest. In the context of software product lines, applying model slicing separately to each variant is in general infeasible. Delta modeling allows exploiting the explicit specification of commonality and variability within deltas and enables the reuse of artifacts and already obtained results to reduce modeling and analysis effort. In this paper, we propose a novel approach to incremental model slicing for delta-oriented software product lines. Based on the specification of model changes between variants by means of model regression deltas, we achieve both an incremental adaptation of variant-specific dependency graphs and an incremental slice computation. The slice computation further allows for the derivation of differences between slices for the same point of interest, enhancing, e.g., change impact analysis. We provide details of our incremental approach, discuss its benefits, and present future work.
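To make the underlying operations concrete, here is a minimal, non-incremental sketch of backward slicing over a dependency graph together with a naive delta application; the graph encoding and names are hypothetical, and the paper's actual contribution, updating slices incrementally from regression deltas, is precisely what this toy version does not do:

```python
from collections import deque

def backward_slice(deps, criterion):
    """All nodes that can influence `criterion`.

    `deps` maps each node to the set of nodes it depends on.
    """
    seen, work = {criterion}, deque([criterion])
    while work:
        for pred in deps.get(work.popleft(), ()):
            if pred not in seen:
                seen.add(pred)
                work.append(pred)
    return seen

def apply_delta(deps, removed_edges=(), added_edges=()):
    """Produce a variant's dependency graph from a base graph."""
    new = {n: set(s) for n, s in deps.items()}
    for src, dst in removed_edges:
        new.get(src, set()).discard(dst)
    for src, dst in added_edges:
        new.setdefault(src, set()).add(dst)
    return new

base = {"d": {"b", "c"}, "b": {"a"}, "c": set(), "a": set()}
variant = apply_delta(base, removed_edges=[("d", "c")], added_edges=[("d", "e")])
print(backward_slice(base, "d"))     # {'a', 'b', 'c', 'd'}
print(backward_slice(variant, "d"))  # {'a', 'b', 'd', 'e'}
```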
DOI: 10.1109/SANER.2015.7081871 (published 2015-03-02)
Citations: 10
Would static analysis tools help developers with code reviews?
Sebastiano Panichella, V. Arnaoudova, M. D. Penta, G. Antoniol
Code reviews have been conducted for decades in software projects, with the aim of improving code quality from many different points of view. During code reviews, developers are supported by checklists, coding standards and, possibly, various kinds of static analysis tools. This paper investigates whether warnings highlighted by static analysis tools are taken care of during code reviews and whether some kinds of warnings tend to be removed more than others. Results of a study conducted by mining the Gerrit repository of six Java open source projects indicate that the density of warnings varies only slightly after each review. The overall percentage of warnings removed during reviews is slightly higher than what previous studies found for the overall project evolution history. However, when looking (quantitatively and qualitatively) at specific categories of warnings, we found that during code reviews developers focus on certain kinds of problems. For such categories of warnings the removal percentage tends to be very high, often above 50% and sometimes up to 100%. Examples are warnings in the imports, regular expressions, and type resolution categories. In conclusion, while broad warning detection might produce far too many false positives, enforcing the removal of certain warnings prior to patch submission could reduce the effort required during the code review process.
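As a hypothetical illustration of the kind of measurement used in such a study, the snippet below computes per-category warning removal percentages from warning lists taken before and after a review (the categories and data are invented, not the study's):

```python
from collections import Counter

def removal_percentages(before, after):
    """Percentage of warnings of each category removed by a review.

    `before` and `after` are lists of warning category names: toy
    stand-ins for real static-analysis output.
    """
    b, a = Counter(before), Counter(after)
    return {cat: 100.0 * (b[cat] - a[cat]) / b[cat] for cat in b}

before = ["imports"] * 4 + ["regexp"] * 2 + ["naming"] * 5
after = ["imports"] * 1 + ["naming"] * 5
print(removal_percentages(before, after))
# {'imports': 75.0, 'regexp': 100.0, 'naming': 0.0}
```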
DOI: 10.1109/SANER.2015.7081826 (published 2015-03-02)
Citations: 71
Automated extraction of failure reproduction steps from user interaction traces
T. Roehm, Stefan Nosovic, B. Brügge
Bug reports submitted by users and crash reports collected by crash reporting tools often lack information about reproduction steps, i.e. the steps necessary to reproduce a failure. Hence, developers have difficulties reproducing field failures and might not be able to fix all reported bugs. We present an approach to automatically extract failure reproduction steps from user interaction traces. We capture interactions between a user and a WIMP GUI using a capture/replay tool. Then, we extract the minimal, failure-inducing subsequence of captured interaction traces. We use three algorithms to perform this extraction: Delta Debugging, Sequential Pattern Mining, and a combination of both. Delta Debugging automatically replays subsequences of an interaction trace to identify the minimal, failure-inducing subsequence. Sequential Pattern Mining identifies the common subsequence in interaction traces inducing the same failure. We evaluated our approach in a case study. We injected four bugs into the code of a mail client application, collected interaction traces of five participants trying to find these bugs, and applied the extraction algorithms. Delta Debugging extracted the minimal, failure-inducing interaction subsequence in 90% of all cases. Sequential Pattern Mining produced failure-inducing interaction sequences in 75% of all cases and removed on average 93% of unnecessary interactions, potentially enabling manual analysis by developers. Both algorithms complement each other because they are applicable in different contexts and can be combined to improve performance.
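Delta Debugging itself is Zeller's published ddmin algorithm; the following is a compact, simplified variant applied to an interaction trace, with a toy failure oracle standing in for actual capture/replay (the trace and oracle are invented for illustration):

```python
def ddmin(trace, fails):
    """Simplified ddmin: shrink `trace` to a smaller failure-inducing
    subsequence. `fails(subtrace)` replays the events and returns True
    if the failure still occurs."""
    n = 2
    while len(trace) >= 2:
        chunk = len(trace) // n
        reduced = False
        for i in range(n):
            # Try removing the i-th chunk (i.e., test the complement).
            complement = trace[:i * chunk] + trace[(i + 1) * chunk:]
            if fails(complement):
                trace, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n == len(trace):
                break  # minimal at this granularity
            n = min(n * 2, len(trace))
    return trace

# Toy oracle: the bug needs "open dialog" followed later by "press OK".
def fails(t):
    return "open dialog" in t and "press OK" in t[t.index("open dialog"):]

trace = ["scroll", "open dialog", "type name", "press OK", "close app"]
print(ddmin(trace, fails))  # ['open dialog', 'press OK']
```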
DOI: 10.1109/SANER.2015.7081822 (published 2015-03-02)
Citations: 8
A non-convex abstract domain for the value analysis of binaries
Sven Mattsen, Arne Wichmann, S. Schupp
A challenge in sound reverse engineering of binary executables is to determine sets of possible targets for dynamic jumps. One technique to address this challenge is abstract interpretation, where singleton values in registers and memory locations are overapproximated to collections of possible values. With contemporary abstract interpretation techniques, convexity is usually enforced on these collections, which causes unacceptable loss of precision. We present a non-convex abstract domain, suitable for the analysis of binary executables. The domain is based on binary decision diagrams (BDD) to allow an efficient representation of non-convex sets of integers. Non-convex sets are necessary to represent the results of jump table lookups and bitwise operations, which are more frequent in executables than in high-level code because of optimizing compilers. Our domain computes abstract bitwise and arithmetic operations precisely and loses precision only for division and multiplication. Because the operations are defined on the structure of the BDDs, they remain efficient even if executed on very large sets. In executables, conditional jumps require solving formulas built with negation and conjunction. We implement a constraint solver using the fast intersection and complementation of BDD-based sets. Our domain is implemented as a plug-in, called BDDStab, and integrated with the binary analysis framework Jakstab. We use Jakstab's k-set and interval domains to discuss the increase in precision for a selection of compiler-generated executables.
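The paper's domain is built on BDDs; purely to illustrate why non-convexity matters, the toy below represents value sets as explicit Python sets and contrasts a precise abstract bitwise AND with the over-approximation a convex interval domain would be forced into (explicit sets would not scale the way BDDs do):

```python
def bitand(xs, ys):
    """Abstract bitwise AND over explicit value sets: a toy stand-in
    for the paper's BDD representation, which scales to huge sets."""
    return {x & y for x in xs for y in ys}

def interval_hull(xs):
    """What a convex (interval) domain would be forced to keep."""
    return set(range(min(xs), max(xs) + 1))

# A jump-table index masked to {0, 8}: the precise result has 2 values,
# while the interval hull [0, 8] drags in 7 spurious jump targets.
masked = bitand({3, 11}, {8})
print(masked)                         # {0, 8}
print(sorted(interval_hull(masked)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```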
DOI: 10.1109/SANER.2015.7081837 (published 2015-03-02)
Citations: 6
A framework for cost-effective dependence-based dynamic impact analysis
Haipeng Cai, Raúl A. Santelices
Dynamic impact analysis can greatly assist developers with managing software changes by focusing their attention on the effects of potential changes relative to concrete program executions. While dependence-based dynamic impact analysis (DDIA) provides finer-grained results than traceability-based approaches, traditional DDIA techniques often produce imprecise results, incurring excessive costs and thus hindering their adoption in many practical situations. In this paper, we present the design and evaluation of a DDIA framework and its three new instances that offer not only much more precise impact sets but also flexible cost-effectiveness options to meet diverse application needs such as different budgets and levels of detail of results. By exploiting both static dependencies and various dynamic information including method-execution traces, statement coverage, and dynamic points-to data, our techniques achieve that goal at reasonable costs according to our experimental results. Our study also suggests that statement coverage generally has stronger effects on the precision and cost-effectiveness of DDIA than dynamic points-to data.
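As a hypothetical sketch of the cheapest kind of input such a framework consumes, the snippet below derives a coarse impact set from a method-execution trace using an executed-after heuristic, in the spirit of earlier trace-based impact analyses; the real framework combines static dependencies with several kinds of dynamic data, which this toy ignores:

```python
def impact_set(trace, changed):
    """Methods executing at or after the first occurrence of `changed`
    in a method-execution trace: a coarse over-approximation of what
    the change could influence in that run."""
    if changed not in trace:
        return set()
    return set(trace[trace.index(changed):])

run = ["main", "parse", "eval", "log", "eval", "report"]
print(impact_set(run, "eval"))  # {'eval', 'log', 'report'}
```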
DOI: 10.1109/SANER.2015.7081833 (published 2015-03-02)
Citations: 15
An observational study on API usage constraints and their documentation
M. Saied, H. Sahraoui, Bruno Dufour
Nowadays, APIs represent the most common form of reuse when developing software. However, the benefits of reuse depend greatly on the ability of client application developers to use the APIs correctly. In this paper, we present an observational study on API usage constraints and their documentation. To conduct the study on a large number of APIs, we implemented and validated strategies to automatically detect four types of usage constraints in existing APIs. We observed that some of the constraint types are frequent and that three of the types are generally not documented. Surprisingly, the absence of documentation is generally specific to the constraints and not due to a broader habit among developers of not documenting.
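As a toy illustration of detecting one plausible constraint type, a call-order constraint such as "connect() must precede query()", the sketch below checks whether the constraint holds across observed client call sequences (the constraint type, method names, and detection strategy are assumptions, not the paper's):

```python
def holds_call_before(usages, first, second):
    """Check whether, across all observed client call sequences, every
    call to `second` is preceded by at least one call to `first`."""
    for seq in usages:
        seen_first = False
        for call in seq:
            if call == first:
                seen_first = True
            elif call == second and not seen_first:
                return False  # counterexample: `second` without `first`
    return True

usages = [
    ["connect", "query", "close"],
    ["connect", "query", "query", "close"],
]
print(holds_call_before(usages, "connect", "query"))  # True
print(holds_call_before(usages, "close", "query"))    # False
```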
DOI: 10.1109/SANER.2015.7081813 (published 2015-03-02)
Citations: 38
A software quality model for RPG
Gergely Ladányi, Z. Tóth, R. Ferenc, Tibor Keresztesi
The IBM i mainframe was designed to manage business applications for which reliability and quality are a matter of national security. The RPG programming language is the most frequently used one on this platform. The maintainability of the source code has a big influence on development costs, which is probably why it is one of the most attractive, observed, and evaluated quality characteristics of all. To improve, or at least preserve, the maintainability level of software, it is necessary to evaluate it regularly. In this study we present a quality model based on the ISO/IEC 25010 international standard for evaluating the maintainability of software systems written in RPG. As an evaluation step of the quality model, we present a case study in which we explain how we integrated the quality model, as a continuous quality monitoring tool, into the business processes of a mid-size software company with more than twenty years of experience in developing RPG applications.
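ISO/IEC 25010-style quality models typically aggregate normalized low-level metric scores into characteristic scores through weighted combination; the sketch below shows that shape with invented metrics and weights, not the paper's calibrated, benchmark-based model:

```python
def maintainability(metrics, weights):
    """Weighted aggregation of normalized metric scores (0..10 scale)
    into a single maintainability score. The metrics, weights, and
    scale are illustrative assumptions, not the paper's calibration."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[name] * metrics[name] for name in weights)

metrics = {"complexity": 4.2, "clone_coverage": 6.5, "comment_density": 7.1}
weights = {"complexity": 0.5, "clone_coverage": 0.3, "comment_density": 0.2}
print(round(maintainability(metrics, weights), 2))  # 5.47
```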
DOI: 10.1109/SANER.2015.7081819 (published 2015-03-02)
Citations: 6
Threshold-free code clone detection for a large-scale heterogeneous Java repository
I. Keivanloo, Feng Zhang, Ying Zou
Code clones are unavoidable entities in software ecosystems. A variety of clone-detection algorithms are available for finding code clones. For Type-3 clone detection at method granularity (i.e., similar methods with changes in statements), the dissimilarity threshold is one of the possible configuration parameters. Existing approaches use a single threshold to detect Type-3 clones across a repository. However, our study shows that detecting Type-3 clones at method granularity on a large-scale heterogeneous repository often requires multiple thresholds. We find that the performance of clone detection improves when different thresholds are selected for different groups of clones in a heterogeneous repository (i.e., different applications). In this paper, we propose a threshold-free approach to detect Type-3 clones at method granularity across a large number of applications. Our approach uses an unsupervised learning algorithm, k-means, to distinguish true from false clones. We use a clone benchmark with 330,840 tagged clones from 24,824 open source Java projects for our study. We observe that our approach significantly improves performance, by 12% in terms of F-measure. Furthermore, our threshold-free approach eliminates practitioners' concern about possible misconfiguration of Type-3 clone detection tools.
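As a hypothetical stand-in for the paper's setup, the sketch below runs a self-contained two-means clustering over method-pair similarity scores, letting the data split candidates into clones and non-clones instead of fixing a dissimilarity threshold up front:

```python
def kmeans_1d(values, iters=100):
    """Two-means clustering of similarity scores: the higher-centroid
    cluster plays the role of 'true clones', so no fixed dissimilarity
    threshold has to be chosen in advance."""
    lo, hi = min(values), max(values)  # seed centroids at the extremes
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        new_lo = sum(a) / len(a) if a else lo
        new_hi = sum(b) / len(b) if b else hi
        if (new_lo, new_hi) == (lo, hi):
            break  # centroids converged
        lo, hi = new_lo, new_hi
    return [int(abs(v - lo) > abs(v - hi)) for v in values]  # 1 = clone

similarities = [0.95, 0.91, 0.88, 0.42, 0.35, 0.87, 0.12]
print(kmeans_1d(similarities))  # [1, 1, 1, 0, 0, 1, 0]
```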
DOI: 10.1109/SANER.2015.7081830 (published 2015-03-02)
Citations: 32