
2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation: Latest Publications

CoordInspector: A Tool for Extracting Coordination Data from Legacy Code
N. Rodrigues, L. Barbosa
More and more current software systems rely on non-trivial coordination logic to combine autonomous services that typically run on different platforms and are often owned by different organizations. Often, however, coordination data is deeply entangled in the code and therefore difficult to isolate and analyse separately. CoordInspector is a software tool which combines slicing and program analysis techniques to isolate all coordination elements from the source code of an existing application. Such a reverse engineering process provides a clear view of the actually invoked services as well as of the orchestration patterns which bind them together. The tool analyses Common Intermediate Language (CIL) code, the native language of the Microsoft .NET framework. Therefore, the scope of application of CoordInspector is quite large: potentially any piece of code developed in any of the programming languages which compile to the .NET framework. The tool generates graphical representations of the coordination layer and identifies the underlying business process orchestrations, rendering them as Orc specifications.
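A minimal, hypothetical sketch of the kind of extraction step such a tool needs, written here in Java over a toy instruction model rather than real CIL: scan a method body for call instructions and keep only those that target assumed service-proxy namespaces, which would be the seeds of a coordination slice. The `CoordinationCallFilter` class and the namespace list are invented for illustration and are not part of CoordInspector.

```java
import java.util.List;

// Illustrative sketch only (not CoordInspector's implementation): walk an
// intermediate-language instruction stream and keep the call sites that touch
// external services, the raw material of a coordination slice.
public class CoordinationCallFilter {

    // Simplified stand-in for a CIL "call"/"callvirt" instruction.
    record CallInstruction(String opcode, String declaringType, String method) {}

    // Heuristic used only for illustration: service proxies are assumed to live here.
    private static final List<String> SERVICE_NAMESPACES =
            List.of("System.ServiceModel", "System.Web.Services", "MyCompany.ServiceProxies");

    static boolean isCoordinationCall(CallInstruction i) {
        if (!i.opcode().equals("call") && !i.opcode().equals("callvirt")) return false;
        return SERVICE_NAMESPACES.stream().anyMatch(ns -> i.declaringType().startsWith(ns));
    }

    public static void main(String[] args) {
        List<CallInstruction> body = List.of(
                new CallInstruction("call", "System.Console", "WriteLine"),
                new CallInstruction("callvirt", "MyCompany.ServiceProxies.PaymentClient", "Authorize"),
                new CallInstruction("callvirt", "System.ServiceModel.ClientBase", "Open"));

        body.stream().filter(CoordinationCallFilter::isCoordinationCall)
            .forEach(c -> System.out.println("coordination call: " + c.declaringType() + "." + c.method()));
    }
}
```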
{"title":"CoordInspector: A Tool for Extracting Coordination Data from Legacy Code","authors":"N. Rodrigues, L. Barbosa","doi":"10.1109/SCAM.2008.10","DOIUrl":"https://doi.org/10.1109/SCAM.2008.10","url":null,"abstract":"More and more current software systems rely on non trivial coordination logic for combining autonomous services typically running on different platforms and often owned by different organizations. Often, however, coordination data is deeply entangled in the code and, therefore, difficult to isolate and analyse separately. CoordInspector is a software tool which combines slicing and program analysis techniques to isolate all coordination elements from the source code of an existing application. Such a reverse engineering process provides a clear view of the actually invoked services as well as of the orchestration patterns which bind them together. The tool analyses Common Intermediate Language (CIL) code, the native language of Microsoft .Net framework. Therefore, the scope of application of CoordInspector is quite large: potentially any piece of code developed in any of the programming languages which compiles to the .Net framework. The tool generates graphical representations of the coordination layer together and identifies the underlying business process orchestrations, rendering them as Orc specifications.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115655820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Automatic Determination of May/Must Set Usage in Data-Flow Analysis
A. Stone, M. Strout, Shweta Behere
Data-flow analysis is a common technique for gathering program information for use in transformations such as register allocation, dead-code elimination, common subexpression elimination, scheduling, and others. Tools for generating data-flow analysis implementations remove the need for implementers to explicitly write code that iterates over statements in a program, but still require them to implement details regarding the effects of aliasing, side effects, arrays, and user-defined structures. This paper presents the DFAGen tool, which generates implementations of locally separable (e.g., bit-vector) data-flow analyses that are pointer-, side-effect-, and aggregate-cognizant, starting from an analysis specification that assumes only scalars. Analysis specifications are typically seven lines long and similar to those in standard compiler textbooks. The main contribution of this work is the automatic determination of may and must set usage within automatically generated data-flow analysis implementations.
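To make the may/must distinction concrete, here is a small, generic bit-vector solver (an illustrative sketch, not DFAGen's generated code): the only difference between a may analysis such as reaching definitions and a must analysis such as available expressions is the meet operator used to combine predecessor facts, together with its corresponding initial value.

```java
import java.util.*;

// Illustrative sketch: a locally separable bit-vector analysis where the sole
// difference between "may" and "must" is the meet operator (union vs intersection).
public class BitVectorDataFlow {

    interface Meet { void apply(BitSet acc, BitSet pred); }
    static final Meet MAY  = BitSet::or;   // union meet, e.g. reaching definitions
    static final Meet MUST = BitSet::and;  // intersection meet, e.g. available expressions

    // Round-robin iteration of OUT[n] = gen[n] | (IN[n] & ~kill[n]) to a fixed point.
    static BitSet[] solve(int[][] preds, BitSet[] gen, BitSet[] kill, Meet meet, int bits) {
        int n = gen.length;
        BitSet[] out = new BitSet[n];
        for (int i = 0; i < n; i++) {
            out[i] = new BitSet(bits);
            if (meet == MUST) out[i].set(0, bits);   // must analyses start from the full set
            else out[i].or(gen[i]);                  // may analyses start from gen
        }
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i < n; i++) {
                BitSet in = new BitSet(bits);
                if (preds[i].length > 0) {
                    if (meet == MUST) in.set(0, bits);   // identity element of intersection
                    for (int p : preds[i]) meet.apply(in, out[p]);
                }
                BitSet newOut = (BitSet) in.clone();
                newOut.andNot(kill[i]);
                newOut.or(gen[i]);
                if (!newOut.equals(out[i])) { out[i] = newOut; changed = true; }
            }
        }
        return out;
    }

    static BitSet bits(int size, int... set) {
        BitSet b = new BitSet(size);
        for (int i : set) b.set(i);
        return b;
    }

    public static void main(String[] args) {
        // Toy CFG: 0 -> 1, 0 -> 2, 1 -> 2; fact f0 is generated in node 0, f1 in node 1, nothing is killed.
        int[][] preds = { {}, {0}, {0, 1} };
        BitSet[] gen  = { bits(2, 0), bits(2, 1), bits(2) };
        BitSet[] kill = { bits(2), bits(2), bits(2) };
        // Under MAY both facts reach node 2; under MUST only f0 does, since f1 holds on only one path.
        System.out.println("may  (union):        " + Arrays.toString(solve(preds, gen, kill, MAY, 2)));
        System.out.println("must (intersection): " + Arrays.toString(solve(preds, gen, kill, MUST, 2)));
    }
}
```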
{"title":"Automatic Determination of May/Must Set Usage in Data-Flow Analysis","authors":"A. Stone, M. Strout, Shweta Behere","doi":"10.1109/SCAM.2008.28","DOIUrl":"https://doi.org/10.1109/SCAM.2008.28","url":null,"abstract":"Data-flow analysis is a common technique to gather program information for use in transformations such as register allocation, dead-code elimination, common subexpression elimination, scheduling, and others. Tools for generating data-flow analysis implementations remove the need for implementers to explicitly write code that iterates over statements in a program, but still require them to implement details regarding the effects of aliasing, side effects, arrays, and user-defined structures. This paper presents the DFAGen Tool, which generates implementations for locally separable (e.g. bit-vector) data-flow analyses that are pointer, side-effect, and aggregate cognizant from an analysis specification that assumes only scalars. Analysis specifications are typically seven lines long and similar to those in standard compiler textbooks. The main contribution of this work is the automatic determination of may and must set usage within automatically generated data-flow analysis implementations.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127266624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Constructing Subtle Faults Using Higher Order Mutation Testing
Yue Jia, M. Harman
Traditional mutation testing considers only first-order mutants, created by the injection of a single fault. Often these first-order mutants denote trivial faults that are easily killed. This paper investigates higher-order mutants (HOMs). It introduces the concept of a subsuming HOM: one that is harder to kill than the first-order mutants from which it is constructed. By definition, subsuming HOMs denote subtle fault combinations. The paper reports the results of an empirical study into subsuming HOMs, using six benchmark programs; this is the largest study of mutation testing to date. To overcome the exponential explosion in the number of mutants considered, the paper introduces a search-based approach to the identification of subsuming HOMs. Results are presented for a greedy algorithm, a genetic algorithm, and a hill-climbing algorithm.
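The following is a hedged sketch of the greedy flavour of such a search, not the paper's algorithm: first-order mutants are combined one at a time as long as the resulting candidate HOM is killed by strictly fewer tests. In practice the killing set of each candidate HOM is measured by running the test suite; here a stub oracle with hypothetical data stands in for that run.

```java
import java.util.*;
import java.util.function.Function;

// Sketch of a greedy search for a subsuming higher-order mutant (HOM).
public class GreedyHomSearch {

    // Greedily grows a HOM (a set of first-order mutant ids), adding one constituent
    // at a time as long as the candidate's killing set strictly shrinks.
    static Set<Integer> greedy(List<Integer> foms, Function<Set<Integer>, Set<String>> killingTests) {
        Set<Integer> hom = new HashSet<>();
        Set<String> bestKills = null;
        boolean improved = true;
        while (improved) {
            improved = false;
            Integer bestNext = null;
            for (Integer m : foms) {
                if (hom.contains(m)) continue;
                Set<Integer> candidate = new HashSet<>(hom);
                candidate.add(m);
                Set<String> kills = killingTests.apply(candidate);
                if (kills.isEmpty()) continue;    // a candidate no test kills tells us nothing here
                if (bestKills == null || kills.size() < bestKills.size()) {
                    bestKills = kills;
                    bestNext = m;
                    improved = true;
                }
            }
            if (bestNext != null) hom.add(bestNext);
        }
        return hom;
    }

    public static void main(String[] args) {
        // Hypothetical killing sets, as if measured by running the test suite on each candidate.
        Map<Set<Integer>, Set<String>> measured = Map.of(
                Set.of(1), Set.of("t1", "t2", "t3"),
                Set.of(2), Set.of("t2", "t3", "t4"),
                Set.of(1, 2), Set.of("t2"));          // the two faults partially mask each other
        Function<Set<Integer>, Set<String>> oracle =
                c -> measured.getOrDefault(c, Set.of("t1", "t2", "t3", "t4"));
        System.out.println("subsuming HOM built from mutants " + greedy(List.of(1, 2, 3), oracle));
    }
}
```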
{"title":"Constructing Subtle Faults Using Higher Order Mutation Testing","authors":"Yue Jia, M. Harman","doi":"10.1109/SCAM.2008.36","DOIUrl":"https://doi.org/10.1109/SCAM.2008.36","url":null,"abstract":"Traditional mutation testing considers only first order mutants, created by the injection of a single fault. Often these first order mutants denote trivial faults that are easily killed. This paper investigates higher order mutants (HOMs). It introduces the concept of a subsuming HOM; one that is harder to kill than the first order mutants from which it is constructed. By definition, subsuming HOMs denote subtle fault combinations. The paper reports the results of an empirical study into subsuming HOMs, using six benchmark programs. This is the largest study of mutation testing to date. To overcome the exponential explosion in the number of mutants considered, the paper introduces a search based approach to the identification of subsuming HOMs. Results are presented for a greedy algorithm, a genetic algorithm and a hill climbing algorithm.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"26 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124309830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 210
Beyond Annotations: A Proposal for Extensible Java (XJ)
T. Clark, P. Sammut, J. Willans
Annotations provide a limited way of extending Java in order to tailor the language for specific tasks. This paper describes a proposal for a Java extension which generalises annotations to allow Java to be a platform for developing domain specific languages.
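As a reminder of the baseline mechanism the proposal generalises, the snippet below shows the annotation route in plain Java: an annotation can only attach metadata for some external processor to interpret, and cannot introduce new syntax or semantics on its own. The `@Transition` mini-DSL is hypothetical and is not taken from the paper.

```java
import java.lang.annotation.*;

// Baseline illustration: annotations as a limited, metadata-only extension point.
public class AnnotationDslExample {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Transition {
        String from();
        String to();
    }

    static class TurnstileController {
        @Transition(from = "LOCKED", to = "UNLOCKED")
        void coin() { /* the behaviour still has to be written or generated elsewhere */ }
    }

    public static void main(String[] args) throws Exception {
        var m = TurnstileController.class.getDeclaredMethod("coin");
        Transition t = m.getAnnotation(Transition.class);
        System.out.println("coin(): " + t.from() + " -> " + t.to());
    }
}
```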
{"title":"Beyond Annotations: A Proposal for Extensible Java (XJ)","authors":"T. Clark, P. Sammut, J. Willans","doi":"10.1109/SCAM.2008.34","DOIUrl":"https://doi.org/10.1109/SCAM.2008.34","url":null,"abstract":"Annotations provide a limited way of extending Java in order to tailor the language for specific tasks. This paper describes a proposal for a Java extension which generalises annotations to allow Java to be a platform for developing domain specific languages.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123168826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
The Evolution and Decay of Statically Detected Source Code Vulnerabilities
M. D. Penta, L. Cerulo, Lerina Aversano
The presence of vulnerable statements in the source code is a crucial problem for maintainers: properly monitoring them and, if necessary, removing them is highly desirable to ensure high security and reliability. To this aim, a number of static analysis tools have been developed to detect the presence of instructions that can be subject to vulnerability attacks, ranging from buffer overflow exploitations to command injection and cross-site scripting. Based on the availability of existing tools and of data extracted from software repositories, this paper reports an empirical study on the evolution of vulnerable statements detected in three software systems with different static analysis tools. Specifically, the study investigates vulnerability evolution trends and the decay time exhibited by different kinds of vulnerabilities.
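A hypothetical sketch of the kind of measurement such a study relies on (not the authors' tooling): given which vulnerability reports a static analyser emits for each analysed snapshot, the decay time of a vulnerability can be approximated as the time between the first and the last snapshot in which it is reported.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.*;

// Illustrative only: approximate decay times from per-snapshot static-analysis reports.
public class VulnerabilityDecay {

    static Map<String, Long> decayDays(SortedMap<LocalDate, Set<String>> reportsBySnapshot) {
        Map<String, LocalDate> firstSeen = new HashMap<>(), lastSeen = new HashMap<>();
        reportsBySnapshot.forEach((date, ids) -> ids.forEach(id -> {
            firstSeen.putIfAbsent(id, date);   // earliest snapshot reporting this id
            lastSeen.put(id, date);            // latest snapshot reporting this id
        }));
        Map<String, Long> decay = new HashMap<>();
        firstSeen.forEach((id, first) ->
                decay.put(id, ChronoUnit.DAYS.between(first, lastSeen.get(id))));
        return decay;
    }

    public static void main(String[] args) {
        // Hypothetical report ids (category plus location), as a real tool might emit.
        SortedMap<LocalDate, Set<String>> reports = new TreeMap<>(Map.of(
                LocalDate.of(2008, 1, 1), Set.of("CWE-120#foo.c:42", "CWE-78#bar.c:10"),
                LocalDate.of(2008, 2, 1), Set.of("CWE-120#foo.c:42"),
                LocalDate.of(2008, 3, 1), Set.of()));
        System.out.println(decayDays(reports));   // the buffer overflow survives one month longer
    }
}
```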
{"title":"The Evolution and Decay of Statically Detected Source Code Vulnerabilities","authors":"M. D. Penta, L. Cerulo, Lerina Aversano","doi":"10.1109/SCAM.2008.20","DOIUrl":"https://doi.org/10.1109/SCAM.2008.20","url":null,"abstract":"The presence of vulnerable statements in the source code is a crucial problem for maintainers: properly monitoring and, if necessary, removing them is highly desirable to ensure high security and reliability. To this aim, a number of static analysis tools have been developed to detect the presence of instructions that can be subject to vulnerability attacks, ranging from buffer overflow exploitations to command injection and cross-site scripting.Based on the availability of existing tools and of data extracted from software repositories, this paper reports an empirical study on the evolution of vulnerable statements detected in three software systems with different static analysis tools. Specifically, the study investigates on vulnerability evolution trends and on the decay time exhibited by different kinds of vulnerabilities.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"249 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116292003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Automated Migration of List Based JSP Web Pages to AJAX
J. Chu, T. Dean
AJAX is a Web application programming technique that allows portions of a Web page to be loaded dynamically, separate from other parts of the Web page. This gives the user a much smoother experience when viewing the Web page. This paper describes the process of converting a class of Web pages from round-trip to AJAX.
{"title":"Automated Migration of List Based JSP Web Pages to AJAX","authors":"J. Chu, T. Dean","doi":"10.1109/SCAM.2008.29","DOIUrl":"https://doi.org/10.1109/SCAM.2008.29","url":null,"abstract":"AJAX is a Web application programming technique that allows portions of a Web page to be loaded dynamically, separate from other parts of the Web page. This gives the user a much smoother experience when viewing the Web page. This paper describes the process of converting a class of Web pages from round-trip to AJAX.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131256603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Evaluating Key Statements Analysis
D. Binkley, N. Gold, M. Harman, Zheng Li, Kiarash Mahdavi
Key statement analysis extracts from a program the statements that form the core of the program's computation. A good set of key statements is small but has a large impact. Key statements form a useful starting point for understanding and manipulating a program. An empirical investigation of three kinds of key statements is presented. The three are based on Bieman and Ott's principal variables. To be effective, the key statements must have high impact and form a small, highly cohesive unit. Using a minor improvement of metrics for measuring impact and cohesion, key statements are shown to capture about 75% of the semantic effect of the function from which they are drawn. At the same time, they have cohesion about 20 percentage points higher than that of the corresponding function. A statistical analysis of the differences shows that key statements have higher average impact and higher average cohesion (p<0.001).
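Since the abstract leans on the notions of impact and cohesion, the sketch below shows one simple way such quantities could be computed from precomputed slices. The formulas (fraction of the program reached by the key statements' forward slices, and their average pairwise Jaccard overlap) are illustrative stand-ins, not the metrics used in the paper.

```java
import java.util.*;

// Hedged illustration: toy impact and cohesion measures over precomputed slices,
// each slice represented as a set of statement ids.
public class KeyStatementMetrics {

    // Impact: how much of the program the key statements can influence.
    static double impact(List<Set<Integer>> forwardSlices, int programSize) {
        Set<Integer> reached = new HashSet<>();
        forwardSlices.forEach(reached::addAll);
        return (double) reached.size() / programSize;
    }

    // Cohesion: average pairwise Jaccard similarity of the key statements' slices.
    static double cohesion(List<Set<Integer>> slices) {
        double sum = 0;
        int pairs = 0;
        for (int i = 0; i < slices.size(); i++)
            for (int j = i + 1; j < slices.size(); j++, pairs++) {
                Set<Integer> inter = new HashSet<>(slices.get(i)); inter.retainAll(slices.get(j));
                Set<Integer> union = new HashSet<>(slices.get(i)); union.addAll(slices.get(j));
                sum += union.isEmpty() ? 0 : (double) inter.size() / union.size();
            }
        return pairs == 0 ? 1.0 : sum / pairs;
    }

    public static void main(String[] args) {
        List<Set<Integer>> slices = List.of(Set.of(1, 2, 3, 7), Set.of(2, 3, 8), Set.of(3, 7, 9));
        System.out.printf("impact=%.2f cohesion=%.2f%n", impact(slices, 10), cohesion(slices));
    }
}
```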
{"title":"Evaluating Key Statements Analysis","authors":"D. Binkley, N. Gold, M. Harman, Zheng Li, Kiarash Mahdavi","doi":"10.1109/SCAM.2008.40","DOIUrl":"https://doi.org/10.1109/SCAM.2008.40","url":null,"abstract":"Key statement analysis extracts from a program, statements that form the core of the programpsilas computation. A good set of key statements is small but has a large impact. Key statements form a useful starting point for understanding and manipulating a program. An empirical investigation of three kinds of key statements is presented. The three are based on Bieman and Ottpsilas principal variables. To be effective, the key statements must have high impact and form a small, highly cohesive unit. Using a minor improvement of metrics for measuring impact and cohesion, key statements are shown to capture about 75% of the semantic effect of the function from which they are drawn. At the same time, they have cohesion about 20 percentage points higher than the corresponding function. A statistical analysis of the differences shows that key statements have higher average impact and higher average cohesion (p<0.001).","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132929604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Analysis and Transformations for Efficient Query-Based Debugging
Michael Gorbovitski, K. T. Tekle, Tom Rothamel, S. Stoller, Yanhong A. Liu
This paper describes a framework that supports powerful queries in debugging tools, and describes in particular the transformations, alias analysis, and type analysis used to make the queries efficient. The framework allows queries over the states of all objects at any point in the execution, as well as over the history of states. The transformations are based on incrementally maintaining the results of expensive queries studied in previous work. The alias analysis extends a flow-sensitive intraprocedural analysis to an efficient flow-sensitive interprocedural analysis, with a form of context sensitivity, for an object-oriented language. We also show the power of the framework and the effectiveness of the analyses through case studies and experiments with XML DOM tree transformations, an FTP client, and others. We were able to easily determine the sources of all injected bugs, and we also found an actual bug in the case study on the FTP client.
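A hedged illustration of the incremental-maintenance idea (not the framework's API): rather than rescanning every live object each time a debugging query such as "all accounts with a negative balance" is asked, the query result is updated at each field write. In the framework such maintenance code would be inserted by program transformation; in this toy example the write hook is written by hand.

```java
import java.util.*;

// Illustrative sketch: a debugging query whose result set is maintained incrementally.
public class IncrementalQuery {

    static class Account {
        final String id;
        int balance;
        Account(String id, int balance) { this.id = id; this.balance = balance; }
        public String toString() { return id + "=" + balance; }
    }

    private final Set<Account> overdrawn = new HashSet<>();   // maintained query result

    // All writes go through this hook, so the query result never has to be recomputed.
    void setBalance(Account a, int newBalance) {
        a.balance = newBalance;
        if (newBalance < 0) overdrawn.add(a); else overdrawn.remove(a);
    }

    Set<Account> overdrawnAccounts() { return overdrawn; }    // answered without any rescan

    public static void main(String[] args) {
        IncrementalQuery debugger = new IncrementalQuery();
        Account a = new Account("a", 10), b = new Account("b", 5);
        debugger.setBalance(a, -3);
        debugger.setBalance(b, 7);
        System.out.println("overdrawn: " + debugger.overdrawnAccounts());   // [a=-3]
    }
}
```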
{"title":"Analysis and Transformations for Efficient Query-Based Debugging","authors":"Michael Gorbovitski, K. T. Tekle, Tom Rothamel, S. Stoller, Yanhong A. Liu","doi":"10.1109/SCAM.2008.27","DOIUrl":"https://doi.org/10.1109/SCAM.2008.27","url":null,"abstract":"This paper describes a framework that supports powerful queries in debugging tools, and describes in particular the transformations, alias analysis, and type analysis used to make the queries efficient. The framework allows queries over the states of all objects at any point in the execution as well as over the history of states. The transformations are based on incrementally maintaining the results of expensive queries studied in previous work. The alias analysis extends the flow-sensitive intraprocedural analysis to an efficient flow-sensitive interprocedural analysis for an object-oriented language with also a form of context sensitivity. We also show the power of the framework and the effectiveness of the analyses through case studies and experiments with XML DOM tree transformations, an FTP client, and others. We were able to easily determine the sources of all injected bugs, and we also found an actual bug in the case study on the FTP client.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128530715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Aspect-Aware Points-to Analysis
Qiang Sun, Jianjun Zhao
Points-to analysis is a fundamental analysis technique whose results are useful in compiler optimization and software engineering tools. Although many points-to analysis algorithms have been proposed for procedural and object-oriented languages like C and Java, so far there is no points-to analysis for aspect-oriented languages. Based on Andersen-style points-to analysis for Java, we propose a flow- and context-insensitive points-to analysis for AspectJ. The main idea is to perform the analysis across the boundary between aspects and classes. Therefore, our technique is able to handle the unique aspectual features. To investigate the effectiveness of our technique, we implement our analysis approach on top of the ajc AspectJ compiler and evaluate it on nine AspectJ benchmarks. The experimental results indicate that, compared to existing Java approaches, the proposed technique can achieve significantly higher precision and run in practical time and space.
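For context, the sketch below shows the inclusion-based (Andersen-style), flow- and context-insensitive core that such an analysis builds on: allocation sites seed points-to sets, and copy statements propagate them to a fixed point. It is a generic illustration, not the paper's implementation, and it omits field accesses, call handling, and the aspect/class boundary treatment that is the paper's actual contribution.

```java
import java.util.*;

// Generic Andersen-style core: inclusion constraints solved to a fixed point.
// Fields, calls, and aspect handling are deliberately omitted.
public class AndersenPointsTo {

    final Map<String, Set<String>> pts = new HashMap<>();          // variable -> abstract objects
    final Map<String, Set<String>> copyEdges = new HashMap<>();    // q -> {p} for statements p = q

    void alloc(String p, String obj) { pts.computeIfAbsent(p, k -> new HashSet<>()).add(obj); }
    void copy(String p, String q)    { copyEdges.computeIfAbsent(q, k -> new HashSet<>()).add(p); }

    // Propagate points-to sets along copy edges until nothing changes.
    void solve() {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (var e : copyEdges.entrySet()) {
                Set<String> src = pts.getOrDefault(e.getKey(), Set.of());
                for (String target : e.getValue())
                    changed |= pts.computeIfAbsent(target, k -> new HashSet<>()).addAll(src);
            }
        }
    }

    public static void main(String[] args) {
        AndersenPointsTo a = new AndersenPointsTo();
        a.alloc("x", "o1");   // x = new A()
        a.alloc("y", "o2");   // y = new A()
        a.copy("z", "x");     // z = x
        a.copy("z", "y");     // z = y
        a.solve();
        System.out.println("pts(z) = " + a.pts.get("z"));   // {o1, o2}
    }
}
```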
{"title":"Aspect-Aware Points-to Analysis","authors":"Qiang Sun, Jianjun Zhao","doi":"10.1109/SCAM.2008.30","DOIUrl":"https://doi.org/10.1109/SCAM.2008.30","url":null,"abstract":"Points-to analysis is a fundamental analysis technique whose results are useful in compiler optimization and software engineering tools. Although many points-to analysis algorithms have been proposed for procedural and object-oriented languages like C and Java, there is no points-to analysis for aspect-oriented languages so far. Based on Andersen-style points-to analysis for Java, we propose flow- and context-insensitive points-to analysis for AspectJ. The main idea is to perform the analysis crossing the boundary between aspects and classes. Therefore, our technique is able to handle the uniqueaspectual features. To investigate the effectiveness of our technique, we implement our analysis approach on top of the ajc AspectJ compiler and evaluate it on nine AspectJ benchmarks. The experimental result indicates that, compared to existing Java approaches, the proposed technique can achieve a significant higher precision and run in practical time and space.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122237010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Is Cloned Code More Stable than Non-cloned Code?
J. Krinke
This paper presents a study on the stability of cloned code. The results from an analysis of 200 weeks of evolution of five software systems show that the stability, as measured by changes to the system, is dominated by the deletion of code clones. It can also be observed that additions to a system are more often additions to non-cloned code than additions to cloned code. If the dominating factor of deletions is eliminated, it can generally be concluded that cloned code is more stable than non-cloned code.
{"title":"Is Cloned Code More Stable than Non-cloned Code?","authors":"J. Krinke","doi":"10.1109/SCAM.2008.14","DOIUrl":"https://doi.org/10.1109/SCAM.2008.14","url":null,"abstract":"This paper presents a study on the stability of cloned code. The results from an analysis of 200 weeks of evolution of five software system show that the stability as measured by changes to the system is dominated by the deletion of code clones. It can also be observed that additions to a systems are more often additions to non-cloned code than additions to cloned code. If the dominating factor of deletions is eliminated, it can generally be concluded that cloned code is more stable than non-cloned code.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133319148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 130