
Latest publications: 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)

Synthesizing Framework Models for Symbolic Execution
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884856
Jinseong Jeon, Xiaokang Qiu, Jonathan Fetter-Degges, J. Foster, Armando Solar-Lezama
Symbolic execution is a powerful program analysis technique, but it is difficult to apply to programs built using frameworks such as Swing and Android, because the framework code itself is hard to symbolically execute. The standard solution is to manually create a framework model that can be symbolically executed, but developing and maintaining a model is difficult and error-prone. In this paper, we present Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket's focus is on creating models by instantiating design patterns. Pasket takes as input class, method, and type information from the framework API, together with tutorial programs that exercise the framework. From these artifacts and Pasket’s internal knowledge of design patterns, Pasket synthesizes a framework model whose behavior on the tutorial programs matches that of the original framework. We evaluated Pasket by synthesizing models for subsets of Swing and Android. Our results show that the models derived by Pasket are sufficient to allow us to use off-the-shelf symbolic execution tools to analyze Java programs that rely on frameworks.
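The design-pattern idea can be illustrated with a toy model. The sketch below replaces a GUI framework's opaque event dispatch with a plain Observer-pattern stand-in whose control flow a symbolic executor can follow directly. All class and method names here are illustrative assumptions, not Pasket's actual output (which is Java code matching the Swing API).

```python
# Hypothetical sketch: a symbolic-execution-friendly stand-in for a GUI
# framework's button, modeled as the Observer pattern. Names are
# illustrative, not Pasket's synthesized model.

class ButtonModel:
    """Replaces opaque framework internals with a simple observer list."""

    def __init__(self):
        self._listeners = []

    def add_action_listener(self, listener):
        self._listeners.append(listener)

    def click(self):
        # The real framework's native event loop is elided; the model
        # invokes listeners directly, which a symbolic executor can follow.
        for listener in self._listeners:
            listener()

clicks = []
button = ButtonModel()
button.add_action_listener(lambda: clicks.append("pressed"))
button.click()
```

The point of such a model is behavioral equivalence on the tutorial programs, not fidelity to framework internals.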
Cited by: 40
Code Review Quality: How Developers See It
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884840
Oleksii Kononenko, Olga Baysal, Michael W. Godfrey
In a large, long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. In this work, we study code review practices of a large, open source project, and we investigate how the developers themselves perceive code review quality. We present a qualitative study that summarizes the results from a survey of 88 Mozilla core developers. The results provide developer insights into how they define review quality, what factors contribute to how they evaluate submitted code, and what challenges they face when performing review tasks. We found that the review quality is primarily associated with the thoroughness of the feedback, the reviewer's familiarity with the code, and the perceived quality of the code itself. Also, we found that while different factors are perceived to contribute to the review quality, reviewers often find it difficult to keep their technical skills up-to-date, manage personal priorities, and mitigate context switching.
Cited by: 148
RETracer: Triaging Crashes by Reverse Execution from Partial Memory Dumps
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884844
Weidong Cui, Marcus Peinado, S. Cha, Y. Fratantonio, V. Kemerlis
Many software providers operate crash reporting services to automatically collect crashes from millions of customers and file bug reports. Precisely triaging crashes is necessary and important for software providers because the millions of crashes that may be reported every day are critical in identifying high impact bugs. However, the triaging accuracy of existing systems is limited, as they rely only on the syntactic information of the stack trace at the moment of a crash without analyzing program semantics. In this paper, we present RETracer, the first system to triage software crashes based on program semantics reconstructed from memory dumps. RETracer was designed to meet the requirements of large-scale crash reporting services. RETracer performs binary-level backward taint analysis without a recorded execution trace to understand how functions on the stack contribute to the crash. The main challenge is that the machine state at an earlier time cannot be recovered completely from a memory dump, since most instructions are information-destroying. We have implemented RETracer for x86 and x86-64 native code, and compared it with the existing crash triaging tool used by Microsoft. We found that RETracer eliminates two thirds of triage errors based on a manual analysis of 140 bugs fixed in Microsoft Windows and Office. RETracer has been deployed as the main crash triaging system on Microsoft's crash reporting service.
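The direction of the analysis can be shown on a toy example. The sketch below propagates taint backward through a list of move-style instructions, starting from the register that held the bad pointer at crash time. RETracer works on real x86 memory dumps without any recorded trace; this toy assumes a known instruction list purely to illustrate backward propagation.

```python
# Toy backward taint propagation over (dst, src) move instructions.
# This is an illustration of the general technique, not RETracer's
# actual reverse-execution algorithm.

def backward_taint(instructions, crash_reg):
    tainted = {crash_reg}
    # Walk the instructions in reverse: when a tainted destination is
    # written, the taint transfers to the source operand.
    for dst, src in reversed(instructions):
        if dst in tainted:
            tainted.discard(dst)
            tainted.add(src)
    return tainted

trace = [("rbx", "rdi"), ("rax", "rbx")]  # rax <- rbx <- rdi
origin = backward_taint(trace, "rax")     # taint flows back to rdi
```

The hard part RETracer solves is doing this when most of the machine state needed to resolve operands has been destroyed and must be reconstructed from the dump.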
Cited by: 58
On The Limits of Mutation Reduction Strategies
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884787
Rahul Gopinath, Mohammad Amin Alipour, Iftekhar Ahmed, Carlos Jensen, Alex Groce
Although mutation analysis is considered the best way to evaluate the effectiveness of a test suite, hefty computational cost often limits its use. To address this problem, various mutation reduction strategies have been proposed, all seeking to reduce the number of mutants while maintaining the representativeness of an exhaustive mutation analysis. While research has focused on the reduction achieved, the effectiveness of these strategies in selecting representative mutants, and the limits in doing so have not been investigated, either theoretically or empirically. We investigate the practical limits to the effectiveness of mutation reduction strategies, and provide a simple theoretical framework for thinking about the absolute limits. Our results show that the limit in improvement of effectiveness over random sampling for real-world open source programs is a mean of only 13.078%. Interestingly, there is no limit to the improvement that can be made by addition of new mutation operators. Given that this is the maximum that can be achieved with perfect advance knowledge of mutation kills, what can be practically achieved may be much worse. We conclude that more effort should be focused on enhancing mutations than removing operators in the name of selective mutation for questionable benefit.
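The random-sampling baseline the paper compares against can be sketched in a few lines: estimate a test suite's mutation score from a random subset of mutants rather than the full set. The kill data below is invented for illustration.

```python
import random

# Toy mutant sampling: 1 = killed by the test suite, 0 = surviving.
# Values are made up for illustration, not from the paper's subjects.

def mutation_score(killed_flags):
    return sum(killed_flags) / len(killed_flags)

def sampled_score(killed_flags, fraction, seed=0):
    """Estimate the score from a random sample of the mutants."""
    rng = random.Random(seed)
    k = max(1, int(len(killed_flags) * fraction))
    return mutation_score(rng.sample(killed_flags, k))

mutants = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
full = mutation_score(mutants)        # 0.7 on this toy data
approx = sampled_score(mutants, 0.5)  # estimate from half the mutants
```

The paper's result bounds how much any smarter reduction strategy can improve on this baseline: about 13% on average, with perfect advance knowledge of kills.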
Cited by: 52
Multi-objective Software Effort Estimation
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884830
Federica Sarro, Alessio Petrozziello, M. Harman
We introduce a bi-objective effort estimation algorithm that combines Confidence Interval Analysis and assessment of Mean Absolute Error. We evaluate our proposed algorithm on three different alternative formulations, baseline comparators and current state-of-the-art effort estimators applied to five real-world datasets from the PROMISE repository, involving 724 different software projects in total. The results reveal that our algorithm outperforms the baseline, state-of-the-art and all three alternative formulations, statistically significantly (p
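The two ingredients named in the abstract can be sketched directly: Mean Absolute Error as one objective, and the Pareto-dominance test that bi-objective search uses to compare candidate estimators. This is a generic sketch of those building blocks, not the authors' algorithm; the effort values are invented.

```python
# Building blocks of bi-objective effort estimation (illustrative only).

def mean_absolute_error(actual, predicted):
    """MAE: average absolute deviation between actual and predicted effort."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def dominates(obj_a, obj_b):
    """True if solution A is no worse on every objective and better on one.

    Both arguments are tuples of objective values to be minimized.
    """
    return all(x <= y for x, y in zip(obj_a, obj_b)) and obj_a != obj_b

actual = [120, 340, 90]   # person-hours, invented
pred = [100, 360, 95]
mae = mean_absolute_error(actual, pred)  # (20 + 20 + 5) / 3 = 15.0
```

A search then keeps the set of mutually non-dominated estimators, trading MAE against the confidence-interval objective.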
Cited by: 150
Optimizing Selection of Competing Services with Probabilistic Hierarchical Refinement
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884861
Tian Huat Tan, Manman Chen, Jun Sun, Yang Liu, É. André, Yinxing Xue, J. Dong
Recently, many large enterprises (e.g., Netflix, Amazon) have decomposed their monolithic application into services, and composed them to fulfill their business functionalities. Many hosting services on the cloud, with different Quality of Service (QoS) (e.g., availability, cost), can be used to host the services. This is an example of competing services. QoS is crucial for the satisfaction of users. It is important to choose a set of services that maximize the overall QoS, and satisfy all QoS requirements for the service composition. This problem, known as optimal service selection, is NP-hard. Therefore, an effective method for reducing the search space and guiding the search process is highly desirable. To this end, we introduce a novel technique, called Probabilistic Hierarchical Refinement (PROHR). PROHR effectively reduces the search space by removing competing services that cannot be part of the selection. PROHR provides two methods, probabilistic ranking and hierarchical refinement, that enable smart exploration of the reduced search space. Unlike existing approaches that perform poorly when QoS requirements become stricter, PROHR maintains high performance and accuracy, independent of the strictness of the QoS requirements. PROHR has been evaluated on a publicly available dataset, and has shown significant improvement over existing approaches.
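The underlying optimization problem can be stated as a tiny brute-force example: pick one candidate service per task so overall availability is maximized under a cost budget. PROHR's contribution is precisely avoiding this enumeration (by pruning and ranking candidates); the service names and QoS numbers below are invented.

```python
from itertools import product

# Brute-force optimal service selection on a toy instance (illustrative;
# the real problem is NP-hard and PROHR prunes rather than enumerates).

def select(candidates, budget):
    """candidates: one list of alternative services per task."""
    best, best_avail = None, -1.0
    for combo in product(*candidates):
        cost = sum(s["cost"] for s in combo)
        avail = 1.0
        for s in combo:
            avail *= s["avail"]  # composition is up iff every service is up
        if cost <= budget and avail > best_avail:
            best, best_avail = combo, avail
    return best, best_avail

task1 = [{"name": "s1a", "cost": 5, "avail": 0.99},
         {"name": "s1b", "cost": 2, "avail": 0.95}]
task2 = [{"name": "s2a", "cost": 4, "avail": 0.999},
         {"name": "s2b", "cost": 1, "avail": 0.90}]
combo, avail = select([task1, task2], budget=7)
```

With the budget at 7, the cheapest-first combination loses to s1b + s2a, which keeps availability high while staying affordable.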
Cited by: 27
Decoupling Level: A New Metric for Architectural Maintenance Complexity
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884825
Ran Mo, Yuanfang Cai, R. Kazman, Lu Xiao, Qiong Feng
Despite decades of research on software metrics, we still cannot reliably measure if one design is more maintainable than another. Software managers and architects need to understand whether their software architecture is "good enough", whether it is decaying over time and, if so, by how much. In this paper, we contribute a new architecture maintainability metric---Decoupling Level (DL)---derived from Baldwin and Clark's option theory. Instead of measuring how coupled an architecture is, we measure how well the software can be decoupled into small and independently replaceable modules. We measured the DL for 108 open source projects and 21 industrial projects, each of which has multiple releases. Our main result shows that the larger the DL, the better the architecture. By "better" we mean: the more likely bugs and changes can be localized and separated, and the more likely that developers can make changes independently. The DL metric also opens the possibility of quantifying canonical principles of single responsibility and separation of concerns, aiding cross-project comparison and architecture decay monitoring.
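The abstract does not give the DL formula, so the sketch below measures only a related, simpler notion on a module dependency graph: the fraction of modules that nothing else depends on, i.e. modules replaceable without touching the rest. This is an assumption-laden stand-in for the metric's intent, not the paper's definition; the graph is invented.

```python
# Illustrative only: NOT the paper's Decoupling Level formula. We compute
# the fraction of modules with no incoming dependencies, a crude proxy
# for "independently replaceable".

def independently_replaceable_fraction(deps):
    """deps maps each module to the set of modules it depends on."""
    modules = set(deps)
    depended_on = set()
    for targets in deps.values():
        depended_on |= targets
    free = modules - depended_on  # nothing depends on these
    return len(free) / len(modules)

graph = {"ui": {"core"}, "cli": {"core"}, "core": set(), "docs": set()}
frac = independently_replaceable_fraction(graph)  # 3 of 4 modules
```

The actual DL additionally weighs module size and the layered structure of the dependency graph.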
Cited by: 67
Building a Theory of Job Rotation in Software Engineering from an Instrumental Case Study
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884837
Ronnie E. S. Santos, F. Silva, C. Magalhães, Cleviton V. F. Monteiro
Job Rotation is an organizational practice in which individuals are frequently moved from a job (or project) to another in the same organization. Studies in other areas have found that this practice has both negative and positive effects on individuals' work. However, only a few studies have addressed this issue in software engineering so far. The goal of our study is to investigate the effects of job rotation on work-related factors in software engineering by performing a qualitative case study on a large software organization that uses job rotation as an organizational practice. We interviewed senior managers, project managers, and software engineers that had experienced this practice. Altogether, 48 participants were involved in all phases of this research. Collected data was analyzed using qualitative coding techniques and the results were checked and validated with participants through member checking. Our findings suggest that it is necessary to find balance between the positive effects on work variety and learning opportunities, and negative effects on cognitive workload and performance. Further, the lack of feedback resulting from constant movement among projects and teams may have a negative impact on performance feedback. We conclude that job rotation is an important organizational practice with important positive results. However, managers must be aware of potential negative effects and deploy tactics to balance them. We discuss such tactics in this article.
Cited by: 27
Automated Energy Optimization of HTTP Requests for Mobile Applications
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884867
Ding Li, Yingjun Lyu, Jiaping Gui, William G. J. Halfond
Energy is a critical resource for apps that run on mobile devices. Among all operations, making HTTP requests is one of the most energy consuming. Previous studies have shown that bundling smaller HTTP requests into a single larger HTTP request can be an effective way to improve energy efficiency of network communication, but have not defined an automated way to detect when apps can be bundled nor to transform the apps to do this bundling. In this paper we propose an approach to reduce the energy consumption of HTTP requests in Android apps by automatically detecting and then bundling multiple HTTP requests. Our approach first detects HTTP requests that can be bundled using static analysis, then uses a proxy based technique to bundle HTTP requests at runtime. We evaluated our approach on a set of real world marketplace Android apps. In this evaluation, our approach achieved an average energy reduction of 15% for the subject apps and did not impose a significant runtime overhead on the optimized apps.
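The bundling idea itself is simple to sketch: several small GETs are coalesced into one request whose body lists all target URLs, so the cellular radio is woken once instead of once per request. The proxy payload format and URLs below are assumptions for illustration, not the paper's actual protocol.

```python
import json

# Illustrative bundling payload: one proxy request standing in for
# len(urls) separate GETs. The wire format here is an assumption.

def bundle_requests(urls):
    """Build a single proxy payload describing every bundled GET."""
    return json.dumps({"bundle": [{"method": "GET", "url": u} for u in urls]})

payload = bundle_requests([
    "https://api.example.com/user",
    "https://api.example.com/feed",
])
```

The hard parts the paper automates sit around this: the static analysis that proves two requests can safely be issued together, and the runtime proxy that splits the bundled response back out to the app.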
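The bundling idea described in the abstract can be sketched in a few lines. This is an illustrative example, not the paper's actual tool: the batch wire format, the `/batch` endpoint, and the `HttpRequest` type are all hypothetical, standing in for whatever protocol the cooperating proxy would use.

```python
# Hedged sketch: collapsing several small HTTP requests into one batched
# request, so the mobile radio powers up once instead of once per request.
from dataclasses import dataclass
from typing import List
import json

@dataclass
class HttpRequest:
    method: str
    path: str
    body: str = ""

def bundle(requests: List[HttpRequest]) -> HttpRequest:
    """Combine independent requests into a single POST whose JSON body
    lists the original sub-requests; a cooperating proxy would split
    them server-side and multiplex the responses back."""
    batch = [{"method": r.method, "path": r.path, "body": r.body}
             for r in requests]
    return HttpRequest("POST", "/batch", json.dumps(batch))

# Three small GETs become one POST carrying all three sub-requests.
reqs = [HttpRequest("GET", f"/api/item/{i}") for i in range(3)]
bundled = bundle(reqs)
print(bundled.method, bundled.path, len(json.loads(bundled.body)))
```

The energy win comes from amortizing the radio's high-power tail time across one transmission; the paper's contribution is deciding statically which requests are safe to combine, which this sketch does not attempt.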
{"title":"Automated Energy Optimization of HTTP Requests for Mobile Applications","authors":"Ding Li, Yingjun Lyu, Jiaping Gui, William G. J. Halfond","doi":"10.1145/2884781.2884867","DOIUrl":"https://doi.org/10.1145/2884781.2884867","url":null,"abstract":"Energy is a critical resource for apps that run on mobile devices. Among all operations, making HTTP requests is one of the most energy consuming. Previous studies have shown that bundling smaller HTTP requests into a single larger HTTP request can be an effective way to improve energy efficiency of network communication, but have not defined an automated way to detect when requests can be bundled nor to transform the apps to do this bundling. In this paper we propose an approach to reduce the energy consumption of HTTP requests in Android apps by automatically detecting and then bundling multiple HTTP requests. Our approach first detects HTTP requests that can be bundled using static analysis, then uses a proxy-based technique to bundle HTTP requests at runtime. We evaluated our approach on a set of real world marketplace Android apps. In this evaluation, our approach achieved an average energy reduction of 15% for the subject apps and did not impose a significant runtime overhead on the optimized apps.","PeriodicalId":6485,"journal":{"name":"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)","volume":"16 1","pages":"249-260"},"PeriodicalIF":0.0,"publicationDate":"2016-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78713721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 101
AntMiner: Mining More Bugs by Reducing Noise Interference
Pub Date : 2016-05-14 DOI: 10.1145/2884781.2884870
Bin Liang, Pan Bian, Yan Zhang, Wenchang Shi, Wei You, Yan Cai
Detecting bugs with code mining has proven to be an effective approach. However, existing methods report serious numbers of false positives and false negatives. In this paper, we develop an approach called AntMiner that improves the precision of code mining by carefully preprocessing the source code. Specifically, we employ the program slicing technique to decompose the original source repository into independent sub-repositories, taking critical operations (automatically extracted from the source code) as slicing criteria. In this way, statements irrelevant to a critical operation are excluded from the corresponding sub-repository. In addition, various semantically equivalent representations are normalized into a canonical form. Eventually, the mining process can be performed on a refined code database, and false positives and false negatives can be significantly pruned. We have implemented AntMiner and applied it to detect bugs in the Linux kernel. It reported 52 violations that have either been confirmed as real bugs by the kernel development community or fixed in new kernel versions. Among them, 41 cannot be detected by Coverity, a widely used representative analysis tool. Moreover, a comparative analysis shows that our approach effectively improves the precision of code mining and detects subtle bugs that were previously missed.
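The slicing step the abstract describes can be illustrated with a toy backward slice. This is a minimal sketch under strong assumptions: a hand-built dependence graph stands in for real program analysis, and the statement ids and example "critical operation" are invented for illustration — AntMiner itself operates on real C source.

```python
# Hedged sketch of slicing on a critical operation: starting from the
# criterion, keep only statements it (transitively) depends on, so
# unrelated statements drop out of the sub-repository.

def backward_slice(deps, criterion):
    """deps maps a statement id to the ids it data/control-depends on.
    Returns every statement reachable backwards from the criterion."""
    keep, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        stack.extend(deps.get(node, []))
    return keep

# s4 plays the role of a critical operation (say, a resource release);
# s3 is unrelated logging and is excluded from the slice.
deps = {"s4": ["s2"], "s2": ["s1"], "s3": ["s1"]}
print(sorted(backward_slice(deps, "s4")))
```

Mining rules from such slices rather than whole functions is what reduces the noise: statements like `s3` can no longer vote for or against a pattern they have nothing to do with.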
{"title":"AntMiner: Mining More Bugs by Reducing Noise Interference","authors":"Bin Liang, Pan Bian, Yan Zhang, Wenchang Shi, Wei You, Yan Cai","doi":"10.1145/2884781.2884870","DOIUrl":"https://doi.org/10.1145/2884781.2884870","url":null,"abstract":"Detecting bugs with code mining has proven to be an effective approach. However, the existing methods suffer from reporting serious false positives and false negatives. In this paper, we developed an approach called AntMiner to improve the precision of code mining by carefully preprocessing the source code. Specifically, we employ the program slicing technique to decompose the original source repository into independent sub-repositories, taking critical operations (automatically extracted from source code) as slicing criteria. In this way, the statements irrelevant to a critical operation are excluded from the corresponding sub-repository. Besides, various semantics-equivalent representations are normalized into a canonical form. Eventually, the mining process can be performed on a refined code database, and false positives and false negatives can be significantly pruned. We have implemented AntMiner and applied it to detect bugs in the Linux kernel. It reported 52 violations that have been either confirmed as real bugs by the kernel development community or fixed in new kernel versions. Among them, 41 cannot be detected by a widely used representative analysis tool Coverity. Besides, the result of a comparative analysis shows that our approach can effectively improve the precision of code mining and detect subtle bugs that have previously been missed.","PeriodicalId":6485,"journal":{"name":"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)","volume":"148 1","pages":"333-344"},"PeriodicalIF":0.0,"publicationDate":"2016-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76243839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)