
Latest publications: 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)

How Modern News Aggregators Help Development Communities Shape and Share Knowledge
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180180
M. Aniche, Christoph Treude, Igor Steinmacher, I. Wiese, G. Pinto, M. Storey, M. Gerosa
Many developers rely on modern news aggregator sites such as reddit and hn to stay up to date with the latest technological developments and trends. To understand what motivates developers to contribute, what kind of content is shared, and how knowledge is shaped by the community, we interviewed and surveyed developers who participate in the reddit programming subreddit, and we analyzed a sample of posts on both reddit and hn. We learned what kind of content is shared on these websites and what motivates developers to post, share, discuss, evaluate, and aggregate knowledge on these aggregators, while revealing challenges developers face in how content and participant behavior are moderated. Our insights aim to improve the practices developers follow when using news aggregators and to guide tool makers in improving their tools. Our findings are also relevant to researchers who study developer communities of practice.
Pages: 499-510
Citations: 62
Prioritizing Browser Environments for Web Application Test Execution
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180244
Jung-Hyun Kwon, In-Young Ko, G. Rothermel
When testing client-side web applications, it is important to consider different web-browser environments. Different properties of these environments, such as web-browser types and underlying platforms, may cause a web application to exhibit different types of failures. As web applications evolve, they must be regression tested across these different environments. Because there are many environments to consider, this process can be expensive, resulting in delayed feedback about failures in applications. In this work, we propose six techniques for providing a developer with faster feedback on failures when regression testing web applications across different web-browser environments. Our techniques draw on methods used in test case prioritization; in our case, however, we prioritize web-browser environments based on information about recent and frequent failures. We evaluated our approach using four non-trivial and popular open-source web applications. Our results show that our techniques outperform two baseline methods, no ordering and random ordering, in terms of cost-effectiveness. The improvement rates ranged from -12.24% to 39.05% over no ordering, and from -0.04% to 45.85% over random ordering.
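A minimal sketch of the failure-history idea (the environment names, histories, and weighting below are hypothetical; the paper's six techniques combine recency and frequency information in more refined ways):

```python
from dataclasses import dataclass, field

@dataclass
class EnvHistory:
    name: str
    failures: list = field(default_factory=list)  # 1 = failed, 0 = passed; most recent last

def priority(env: EnvHistory, recency_weight: float = 0.7) -> float:
    """Blend overall failure frequency with the outcome of the most recent run."""
    if not env.failures:
        return 0.0
    freq = sum(env.failures) / len(env.failures)
    recent = env.failures[-1]
    return recency_weight * recent + (1 - recency_weight) * freq

envs = [
    EnvHistory("Chrome/Linux", [0, 0, 1]),     # failed in the latest run
    EnvHistory("Firefox/Windows", [1, 1, 0]),  # failed often, but not recently
    EnvHistory("Safari/macOS", [0, 0, 0]),     # never failed
]
ordered = sorted(envs, key=priority, reverse=True)
print([e.name for e in ordered])  # Chrome/Linux first
```

Running the environment that failed most recently first surfaces new regressions sooner, which is the faster-feedback goal the techniques target.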
Pages: 468-479
Citations: 6
Semantic Program Repair Using a Reference Implementation
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180247
Sergey Mechtaev, Manh-Dung Nguyen, Yannic Noller, Lars Grunske, Abhik Roychoudhury
Automated program repair has been studied via techniques involving search, semantic analysis, and artificial intelligence. Most of these techniques rely on tests as the correctness criterion, which causes the test overfitting problem. Although various approaches, such as learning from a code corpus, have been proposed to address this problem, they cannot guarantee that the generated patches generalize beyond the given tests. This work studies the automated repair of errors using a reference implementation. The reference implementation is symbolically analyzed to automatically infer a specification of the intended behavior. This specification is then used to synthesize a patch that enforces conditional equivalence of the patched and reference programs. Using the reference implementation as an implicit correctness criterion alleviates overfitting in test-based repair. Moreover, since we generate patches by semantic analysis, the reference program may have a substantially different implementation from the patched program, which distinguishes our approach from existing techniques for regression repair such as Relifix. Our experiments in repairing the embedded Linux BusyBox with GNU Coreutils as the reference (and vice versa) revealed that the proposed approach scales to real-world programs and enables the generation of more correct patches.
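The paper derives its specification by symbolic analysis; a much simpler concrete-sampling sketch conveys how a reference implementation can serve as the correctness criterion (all function names and the sampled domain below are hypothetical):

```python
def reference_abs(x: int) -> int:   # reference implementation, assumed correct
    return x if x >= 0 else -x

def buggy_abs(x: int) -> int:       # program under repair: wrong for negatives
    return x if x > 0 else x

def candidate_abs(x: int) -> int:   # a synthesized candidate patch
    return -x if x < 0 else x

def equivalent_on(inputs, f, g) -> bool:
    """Check equivalence of two programs over a sampled input domain."""
    return all(f(i) == g(i) for i in inputs)

domain = range(-5, 6)
print(equivalent_on(domain, buggy_abs, reference_abs))      # the bug is exposed
print(equivalent_on(domain, candidate_abs, reference_abs))  # the patch is accepted
```

The key difference from test-based repair is that the oracle here is another program rather than a fixed test suite, so a patch cannot overfit to a handful of hand-written assertions.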
Pages: 129-139
Citations: 66
A Graph Solver for the Automated Generation of Consistent Domain-Specific Models
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180186
Oszkár Semeráth, András Szabolcs Nagy, Dániel Varró
Many testing and benchmarking scenarios in software and systems engineering depend on the systematic generation of graph models. For instance, the tool qualification required by safety standards needs a large set of consistent (well-formed or malformed) instance models specific to a domain. However, automatically generating consistent graph models that comply with a metamodel and satisfy all well-formedness constraints of industrial domains is a significant challenge. Existing solutions that map graph models into first-order logic specifications for back-end logic solvers (such as Alloy or Z3) have severe scalability issues. In this paper, we propose a graph solver framework for the automated generation of consistent domain-specific instance models. It operates directly over graphs, combining advanced techniques such as refinement of partial models, shape analysis, incremental graph query evaluation, and rule-based design space exploration to provide more efficient guidance. Our initial performance evaluation, carried out in four domains, demonstrates that our approach generates models 1-2 orders of magnitude larger (with 500 to 6,000 objects!) than those produced by mapping-based approaches natively using Alloy.
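For contrast, the naive enumerate-and-filter approach that such a solver must outperform can be sketched in a few lines (a toy metamodel with one well-formedness constraint; everything here is hypothetical, not the paper's algorithm). Its cost is exponential in the number of possible edges, which is exactly the scalability problem the graph solver addresses:

```python
from itertools import product

def well_formed(n_nodes: int, edges: set) -> bool:
    """Toy constraint: every node must have at least one outgoing edge."""
    return all(any(src == v for src, _ in edges) for v in range(n_nodes))

def naive_generate(n_nodes: int):
    """Enumerate every possible edge set and keep only the consistent ones."""
    possible = list(product(range(n_nodes), repeat=2))
    for mask in range(2 ** len(possible)):           # exponential blow-up
        edges = {e for i, e in enumerate(possible) if mask >> i & 1}
        if well_formed(n_nodes, edges):
            yield edges

models = list(naive_generate(2))
print(len(models))  # 9 consistent models out of 16 candidate edge sets
```

Even at 2 nodes the search space is 2^4 candidates; real domain metamodels make this enumeration hopeless, motivating guided refinement of partial models instead.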
Pages: 969-980
Citations: 43
Statistical Learning of API Fully Qualified Names in Code Snippets of Online Forums
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180230
H. Phan, H. Nguyen, Ngoc M. Tran, Linh-Huyen Truong, A. Nguyen, T. Nguyen
Software developers often make use of online forums such as StackOverflow to learn how to use software libraries and their APIs. However, the code snippets in such forums often contain undeclared, ambiguous, or largely unqualified external references. Such declaration ambiguity and external-reference ambiguity present challenges for developers learning to use the APIs correctly. In this paper, we propose StatType, a statistical approach to resolving the fully qualified names (FQNs) of the API elements in such code snippets. Unlike existing approaches based on heuristics, StatType integrates two factors. We first learn, from a large training code corpus, the FQNs that often co-occur. Then, to derive the FQN for an API name in a code snippet, we use that knowledge and leverage the context consisting of neighboring API names. To realize these factors, we treat the problem as statistical machine translation from source code with partially qualified names to source code with the FQNs of the APIs. Our empirical evaluation on real-world code and StackOverflow posts shows that StatType achieves very high accuracy, with 97.6% precision and 96.7% recall, a relative improvement of 16.5% over the state-of-the-art approach.
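The co-occurrence idea can be sketched as a voting table (the counts and API names below are made up for illustration; the paper's actual model is phrase-based statistical machine translation, not a lookup table):

```python
from collections import Counter

# Hypothetical counts mined from a training corpus:
# (simple name, neighboring API name) -> observed FQNs with frequencies
cooccurrence = {
    ("File", "FileReader"): Counter({"java.io.File": 40, "java.nio.file.Path": 1}),
    ("File", "listFiles"):  Counter({"java.io.File": 25}),
    ("List", "ArrayList"):  Counter({"java.util.List": 60, "java.awt.List": 2}),
}

def resolve(simple_name: str, context: list) -> str:
    """Pick the FQN that most often co-occurs with the snippet's neighboring names."""
    votes = Counter()
    for neighbor in context:
        votes.update(cooccurrence.get((simple_name, neighbor), Counter()))
    return votes.most_common(1)[0][0] if votes else None

print(resolve("File", ["FileReader", "listFiles"]))  # java.io.File
print(resolve("List", ["ArrayList"]))                # java.util.List
```

The context is what disambiguates: the bare name `List` could be `java.util.List` or `java.awt.List`, and only the neighboring API names in the snippet tip the vote.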
Pages: 632-642
Citations: 43
Towards Optimal Concolic Testing
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180177
Xinyu Wang, Jun Sun, Zhenbang Chen, Peixin Zhang, Jingyi Wang, Yun Lin
Concolic testing integrates concrete execution (e.g., random testing) and symbolic execution for test case generation. It has been shown to be more cost-effective than random testing or symbolic execution alone in some settings. A concolic testing strategy is a function that decides when to apply random testing or symbolic execution and, in the latter case, which program path to execute symbolically. Many heuristics-based strategies have been proposed, but what constitutes the optimal concolic testing strategy remains an open problem. In this work, we make two contributions toward solving this problem. First, we show that the optimal strategy can be defined based on the probability of program paths and the cost of constraint solving. The problem of identifying the optimal strategy is then reduced to a model checking problem for Markov Decision Processes with Costs. Second, in view of the complexity of identifying the optimal strategy, we design a greedy algorithm for approximating it. We conduct two sets of experiments, one based on randomly generated models and the other on a set of C programs. The results show that existing heuristics leave much room for improvement and that our greedy algorithm often outperforms them.
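A toy version of the greedy trade-off between path probability and solving cost might look like this (the numbers and scoring rule are hypothetical; the paper formalizes the choice as an MDP with costs, not this simple ratio):

```python
def greedy_choice(paths, random_cost=1.0):
    """Greedily pick the action with the best expected paths-covered per unit cost.

    `paths` holds (probability that random testing hits the path,
    cost of solving the path's constraints symbolically) pairs.
    """
    # Random testing covers the most likely uncovered path with that probability.
    best_action = "random"
    best_ratio = max(p for p, _ in paths) / random_cost
    for i, (_, solve_cost) in enumerate(paths):
        ratio = 1.0 / solve_cost  # symbolic execution covers the path with certainty
        if ratio > best_ratio:
            best_action, best_ratio = f"symbolic:{i}", ratio
    return best_action

# A likely path favors cheap random testing; rare paths justify paying the solver.
print(greedy_choice([(0.9, 5.0), (0.001, 2.0)]))   # random
print(greedy_choice([(0.001, 2.0), (0.002, 4.0)])) # symbolic:0
```

This captures the intuition behind the reduction: the optimal policy depends jointly on how reachable each path is by chance and how expensive its constraints are to solve.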
Pages: 291-302
Citations: 56
Enlightened Debugging
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180242
Xiangyu Li, Shaowei Zhu, Marcelo d’Amorim, A. Orso
Numerous automated techniques have been proposed to reduce the cost of software debugging, a notoriously time-consuming and human-intensive activity. Among these techniques, Statistical Fault Localization (SFL) is particularly popular. One issue with SFL is that it rests on strong, often unrealistic assumptions about how developers behave when debugging. To address this problem, we propose Enlighten, an interactive, feedback-driven fault localization technique. Given a failing test, Enlighten (1) leverages SFL and dynamic dependence analysis to identify suspicious method invocations and corresponding data values, (2) presents the developer with a query about the most suspicious invocation, expressed in terms of inputs and outputs, (3) encodes the developer's feedback on the correctness of individual data values as extra program specifications, and (4) repeats these steps until the fault is found. We evaluated Enlighten in two ways. First, we applied it to 1,807 real and seeded faults in three open-source programs, using an automated oracle as a simulated user; for over 96% of these faults, Enlighten required fewer than 10 interactions with the simulated user to localize the fault, and a sensitivity analysis showed that the results were robust to erroneous responses. Second, we performed an actual user study on four faults with 24 participants and found that participants who used Enlighten performed significantly better than those who did not, in terms of both the number of faults localized and the time needed to localize them.
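The query-and-refine loop can be sketched in miniature (the suspiciousness scores, method names, and the exonerate-on-correct rule are hypothetical stand-ins; Enlighten folds feedback in as program specifications rather than zeroing scores):

```python
def rank_invocations(scores: dict) -> list:
    """Most suspicious invocation first."""
    return sorted(scores, key=scores.get, reverse=True)

def debug_loop(scores, oracle, max_queries=10):
    """Query the user about the top-ranked invocation; fold the answer back in
    and re-rank until the fault is found or the query budget runs out."""
    scores = dict(scores)
    for _ in range(max_queries):
        top = rank_invocations(scores)[0]
        verdict = oracle(top)   # user inspects the invocation's inputs/outputs
        if verdict == "faulty":
            return top
        scores[top] = 0.0       # reported correct: exonerate and re-rank
    return None

# Hypothetical SFL scores and a simulated user who knows `parse` is broken.
scores = {"parse": 0.6, "render": 0.8, "save": 0.3}
print(debug_loop(scores, lambda m: "faulty" if m == "parse" else "correct"))  # parse
```

Even in this toy form, one "correct" answer about `render` redirects the search to `parse`, illustrating why interactive feedback beats a static SFL ranking.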
Pages: 82-92
Citations: 32
Almost There: A Study on Quasi-Contributors in Open-Source Software Projects
Pub Date: 2018-05-27 DOI: 10.1145/3180155.3180208
Igor Steinmacher, G. Pinto, I. Wiese, M. Gerosa
Recent studies suggest that well-known OSS projects struggle to find the workforce needed to continue evolving—in part because external developers fail to overcome their first contribution barriers. In this paper, we investigate how and why quasi-contributors (external developers who did not succeed in getting their contributions accepted into an OSS project) fail. To achieve our goal, we collected data from 21 popular, non-trivial GitHub projects, identified quasi-contributors, and analyzed their pull requests. In addition, we conducted surveys with quasi-contributors and project integrators to understand their perceptions of non-acceptance. We found 10,099 quasi-contributors — about 70% of the total number of actual contributors — who submitted 12,367 non-accepted pull requests. In five projects, we found more quasi-contributors than actual contributors. About one-third of the developers who took our survey disagreed with the non-acceptance, and around 30% declared that it demotivated or prevented them from submitting another pull request. The main reasons for pull-request non-acceptance, from the quasi-contributors' perspective, were "superseded/duplicated pull-request" and "mismatch between developer's and team's vision/opinion." A manual analysis of a representative sample of 263 pull requests corroborated this finding. We also found reasons related to the relationship with the community and to a lack of experience or commitment from the quasi-contributors. This empirical study is particularly relevant to those interested in fostering developers' participation and retention in OSS communities.
最近的研究表明,知名的OSS项目很难找到所需的劳动力来继续发展——部分原因是外部开发人员未能克服他们的第一个贡献障碍。在本文中,我们调查了准贡献者(没有成功地让他们的贡献被OSS项目接受的外部开发人员)失败的方式和原因。为了实现我们的目标,我们从21个流行的、重要的GitHub项目中收集了数据,确定了准贡献者,并分析了他们的拉取请求。此外,我们对准贡献者和项目集成商进行了调查,以了解他们对不接受的看法。我们发现10,099个准贡献者(约占实际贡献者总数的70%)提交了12,367个未被接受的pull请求。在五个项目中,我们发现准贡献者多于实际贡献者。在接受我们调查的开发者中,约有三分之一的人不同意不接受游戏,约30%的人表示不接受游戏让他们失去了动力,或者阻止了他们提出下一个下拉请求。从准贡献者的角度来看,不接受拉取请求的主要原因是“取代/重复的拉取请求”和“开发人员和团队的愿景/意见之间的不匹配”。对263个撤回请求的代表性样本进行的人工分析证实了这一发现。我们还发现了与社区关系以及准贡献者缺乏经验或承诺有关的原因。这个实证研究特别与那些对促进开发人员参与和保留OSS社区感兴趣的人相关。
Pub Date : 2018-05-27 DOI: 10.1145/3180155.3180208 Pages: 256-266
Citations: 82
Collective Program Analysis
Pub Date : 2018-05-27 DOI: 10.1145/3180155.3180252
Ganesha Upadhyaya, Hridesh Rajan
Popularity of data-driven software engineering has led to an increasing demand on infrastructures to support efficient execution of tasks that require deeper source code analysis. While task optimization and parallelization are the adopted solutions, other research directions are less explored. We present collective program analysis (CPA), a technique for scaling large-scale source code analyses, especially those that make use of control- and data-flow analysis, by leveraging analysis-specific similarity. Analysis-specific similarity captures whether two or more programs can be considered similar for a given analysis. The key idea of collective program analysis is to cluster programs based on analysis-specific similarity, such that running the analysis on one candidate in each cluster is sufficient to produce the result for the others. To determine analysis-specific similarity and cluster analysis-equivalent programs, we use a sparse representation and a canonical labeling scheme. Our evaluation shows that for a variety of source code analyses on a large dataset of programs, substantial reductions in analysis time can be achieved: on average a 69% reduction compared to a baseline and a 36% reduction compared to a prior technique. We also found that large numbers of analysis-equivalent programs exist in large datasets.
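The clustering idea in the abstract can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not the paper's implementation: `canonical_label` is a hypothetical stand-in for CPA's sparse representation and canonical labeling scheme, and `has_branch` is an invented example analysis.

```python
from collections import defaultdict

def canonical_label(program_tokens):
    # Hypothetical stand-in for canonical labeling: reduce a program to the
    # set of tokens the analysis cares about, so analysis-equivalent programs
    # collapse onto the same label.
    relevant = {"if", "while", "for", "try", "return"}
    return frozenset(t for t in program_tokens if t in relevant)

def collective_analysis(programs, analyze):
    # Cluster programs by label, run the (expensive) analysis once per
    # cluster, and reuse that result for every member of the cluster.
    clusters = defaultdict(list)
    for name, tokens in programs.items():
        clusters[canonical_label(tokens)].append(name)
    results = {}
    for members in clusters.values():
        representative = members[0]
        outcome = analyze(programs[representative])  # one run per cluster
        for member in members:
            results[member] = outcome
    return results

programs = {
    "a.java": ["if", "x", "return"],
    "b.java": ["if", "y", "return"],   # analysis-equivalent to a.java
    "c.java": ["while", "z", "return"],
}
has_branch = lambda tokens: "if" in tokens  # invented example analysis
print(collective_analysis(programs, has_branch))
```

Here `a.java` and `b.java` share one analysis run because they receive the same canonical label, which is the source of the time savings the abstract reports.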
Pages: 620-631
Citations: 9
FaCoY – A Code-to-Code Search Engine
Pub Date : 2018-05-27 DOI: 10.1145/3180155.3180187
Kisub Kim, Dongsun Kim, Tegawendé F. Bissyandé, Eunjong Choi, Li Li, Jacques Klein, Yves Le Traon
Code search is an unavoidable activity in software development. Various approaches and techniques have been explored in the literature to support code search tasks. Most of these approaches focus on serving user queries provided as natural-language free-form input. However, there is a wide range of use-case scenarios where a code-to-code approach would be most beneficial. For example, research directions such as code transplantation, code diversity, and patch recommendation can leverage a code-to-code search engine to find essential ingredients for their techniques. In this paper, we propose FaCoY, a novel approach for statically finding code fragments that may be semantically similar to user input code. FaCoY implements a query alternation strategy: instead of directly matching code query tokens with code in the search space, FaCoY first attempts to identify other tokens that may also be relevant in implementing the functional behavior of the input code.
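The query alternation strategy can be illustrated with a small Python sketch. Everything below is an illustrative assumption about the general idea, not FaCoY's actual index or scoring: the point is that the query is first enriched with tokens that co-occur with it in an indexed corpus, so code that shares behavior-relevant tokens can match even when it shares few literal tokens with the query.

```python
from collections import defaultdict

# Hypothetical token-level corpus standing in for an indexed code base.
corpus = {
    "read_file": ["open", "read", "close", "path"],
    "copy_file": ["open", "read", "write", "close"],
    "sum_list": ["for", "acc", "add", "return"],
}

def build_cooccurrence(snippets):
    # For each token, record every token it appears alongside.
    co = defaultdict(set)
    for tokens in snippets.values():
        for t in tokens:
            co[t].update(tokens)
    return co

def alternate_query(query_tokens, co):
    # Query alternation: enrich the query with all co-occurring tokens.
    expanded = set(query_tokens)
    for t in query_tokens:
        expanded |= co.get(t, set())
    return expanded

def search(query_tokens, snippets):
    co = build_cooccurrence(snippets)
    expanded = alternate_query(query_tokens, co)
    # Rank snippets by overlap with the expanded (not the raw) query.
    scored = {name: len(expanded & set(tokens))
              for name, tokens in snippets.items()}
    return max(scored, key=scored.get)

# A query mentioning only "open" still reaches the file-handling snippets,
# because expansion pulls in "read", "close", and "write".
print(search(["open"], corpus))
```

The design choice worth noting is that expansion happens on the query side: the index stays unchanged, and recall improves because functionally related tokens are matched even when the query omits them.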
Pages: 946-957
Citations: 96
Journal
2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)