
Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering: Latest Publications

Designing for dystopia: software engineering research for the post-apocalypse
Titus Barik, Rahul Pandita, Justin Middleton, E. Murphy-Hill
Software engineering researchers have a tendency to be optimistic about the future. Though useful, optimism bias bolsters unrealistic expectations towards desirable outcomes. We argue that explicitly framing software engineering research through pessimistic futures, or dystopias, will mitigate optimism bias and engender more diverse and thought-provoking research directions. We demonstrate through three pop culture dystopias, Battlestar Galactica, Fallout 3, and Children of Men, how reflecting on dystopian scenarios provides research opportunities as well as implications, such as making research accessible to non-experts, that are relevant to our present.
{"title":"Designing for dystopia: software engineering research for the post-apocalypse","authors":"Titus Barik, Rahul Pandita, Justin Middleton, E. Murphy-Hill","doi":"10.1145/2950290.2983986","DOIUrl":"https://doi.org/10.1145/2950290.2983986","url":null,"abstract":"Software engineering researchers have a tendency to be optimistic about the future. Though useful, optimism bias bolsters unrealistic expectations towards desirable outcomes. We argue that explicitly framing software engineering research through pessimistic futures, or dystopias, will mitigate optimism bias and engender more diverse and thought-provoking research directions. We demonstrate through three pop culture dystopias, Battlestar Galactica, Fallout 3, and Children of Men, how reflecting on dystopian scenarios provides research opportunities as well as implications, such as making research accessible to non-experts, that are relevant to our present.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"133 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75479991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
ECHO: instantaneous in situ race detection in the IDE
Sheng Zhan, Jeff Huang
We present ECHO, a new technique that detects data races instantaneously in the IDE while developers code. ECHO is the first technique of its kind for incremental race detection supporting both code addition and deletion in the IDE. Unlike conventional static race detectors, ECHO warns developers of potential data races immediately as they are introduced into the program. The core underpinning ECHO is a set of new change-aware static analyses based on a novel static happens-before graph that, given a program change, efficiently compute the change-relevant information without re-analyzing the whole program. Our evaluation within a Java environment on both popular benchmarks and real-world applications shows promising results: for each code addition, or deletion, ECHO can instantly pinpoint all the races in a few milliseconds on average, three to four orders of magnitude faster than a conventional whole-program race detector with the same precision.
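To make concrete the kind of defect such a detector reports, here is a minimal Java sketch (our illustration, not code from the paper): two threads perform an unsynchronized read-modify-write on a shared field, so updates can be lost, which is exactly the class of data race an in-IDE detector like ECHO is meant to flag as the code is typed.

```java
// Minimal illustration (not from the paper) of a data race a static race
// detector would flag: two threads write the shared field `counter` without
// any synchronization, so increments can be lost.
public class RacyCounter {
    private int counter = 0; // shared, unsynchronized state

    public void increment() {
        counter++; // read-modify-write: races with concurrent increments
    }

    public static void main(String[] args) throws InterruptedException {
        RacyCounter c = new RacyCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Typically prints less than 200000 because some increments are lost.
        System.out.println(c.counter);
    }
}
```

Guarding increment() with synchronized (or using an AtomicInteger) removes the race; per the abstract, a change-aware analysis would re-examine only the information relevant to the edited code rather than the whole program.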
{"title":"ECHO: instantaneous in situ race detection in the IDE","authors":"Sheng Zhan, Jeff Huang","doi":"10.1145/2950290.2950332","DOIUrl":"https://doi.org/10.1145/2950290.2950332","url":null,"abstract":"We present ECHO, a new technique that detects data races instantaneously in the IDE while developers code. ECHO is the first technique of its kind for incremental race detection supporting both code addition and deletion in the IDE. Unlike conventional static race detectors, ECHO warns developers of potential data races immediately as they are introduced into the program. The core underpinning ECHO is a set of new change-aware static analyses based on a novel static happens-before graph that, given a program change, efficiently compute the change-relevant information without re-analyzing the whole program. Our evaluation within a Java environment on both popular benchmarks and real- world applications shows promising results: for each code addition, or deletion, ECHO can instantly pinpoint all the races in a few milliseconds on average, three to four orders of magnitude faster than a conventional whole-program race detector with the same precision.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85907519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Directed test generation to detect loop inefficiencies
Monika Dhok, M. Ramanathan
Redundant traversal of loops in the context of other loops has been recently identified as a source of performance bugs in many Java libraries. This has resulted in the design of static and dynamic analysis techniques to detect these performance bugs automatically. However, while the effectiveness of dynamic analyses is dependent on the analyzed input tests, static analyses are less effective in automatically validating the presence of these problems, validating the fixes and avoiding regressions in future versions. This necessitates the design of an approach to automatically generate tests for exposing redundant traversal of loops. In this paper, we design a novel, scalable and automatic approach that addresses this goal. Our approach takes a library and an initial set of coverage-driven randomly generated tests as input and generates tests which enable detection of redundant traversal of loops. Our approach is broadly composed of three phases – analysis of the execution of random tests to generate method summaries, identification of methods with potential nested loops along with the appropriate context to expose the problem, and test generation to invoke the identified methods with the appropriate parameters. The generated tests can be analyzed by existing dynamic tools to detect possible performance issues. We have implemented our approach on top of the SOOT bytecode analysis framework and validated it on many open-source Java libraries. Our experiments reveal the effectiveness of our approach in generating 224 tests that reveal 46 bugs across seven libraries, including 34 previously unknown bugs. The tests generated using our approach significantly outperform the randomly generated tests in their ability to expose the inefficiencies, demonstrating the usefulness of our design. The implementation of our tool, named Glider, is available at http://drona.csa.iisc.ac.in/~sss/tools/glider.
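As an illustration of the loop pattern in question (our example, not one from the paper or its Glider tool), the sketch below hides an inner traversal inside an outer loop via List.contains; a directed test with a large input would make the quadratic cost visible, and the second method shows the usual fix.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of a redundant loop traversal: the inner scan
// performed by List.contains runs inside the outer loop, giving O(n^2) work
// where a set-based check gives roughly O(n).
public class Dedup {
    // Inefficient: List.contains scans the whole result list on every iteration.
    static List<Integer> distinctSlow(List<Integer> input) {
        List<Integer> result = new ArrayList<>();
        for (Integer x : input) {
            if (!result.contains(x)) { // hidden inner loop over `result`
                result.add(x);
            }
        }
        return result;
    }

    // Fix: track seen elements in a HashSet, removing the redundant traversal.
    static List<Integer> distinctFast(List<Integer> input) {
        List<Integer> result = new ArrayList<>();
        Set<Integer> seen = new HashSet<>();
        for (Integer x : input) {
            if (seen.add(x)) { // add() returns false for duplicates
                result.add(x);
            }
        }
        return result;
    }
}
```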
{"title":"Directed test generation to detect loop inefficiencies","authors":"Monika Dhok, M. Ramanathan","doi":"10.1145/2950290.2950360","DOIUrl":"https://doi.org/10.1145/2950290.2950360","url":null,"abstract":"Redundant traversal of loops in the context of other loops has been recently identified as a source of performance bugs in many Java libraries. This has resulted in the design of static and dynamic analysis techniques to detect these performance bugs automatically. However, while the effectiveness of dynamic analyses is dependent on the analyzed input tests, static analyses are less effective in automatically validating the presence of these problems, validating the fixes and avoiding regressions in future versions. This necessitates the design of an approach to automatically generate tests for exposing redundant traversal of loops. In this paper, we design a novel, scalable and automatic approach that addresses this goal. Our approach takes a library and an initial set of coverage-driven randomly generated tests as input and generates tests which enable detection of redundant traversal of loops. Our approach is broadly composed of three phases – analysis of the execution of random tests to generate method summaries, identification of methods with potential nested loops along with the appropriate context to expose the problem, and test generation to invoke the identified methods with the appropriate parameters. The generated tests can be analyzed by existing dynamic tools to detect possible performance issues. We have implemented our approach on top of the SOOT bytecode analysis framework and validated it on many open-source Java libraries. Our experiments reveal the effectiveness of our approach in generating 224 tests that reveal 46 bugs across seven libraries, including 34 previously unknown bugs. The tests generated using our approach significantly outperform the randomly generated tests in their ability to expose the inefficiencies, demonstrating the usefulness of our design. The implementation of our tool, named Glider, is available at http://drona.csa.iisc.ac.in/~sss/tools/glider.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"78 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86888877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Can testedness be effectively measured?
Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, Carlos Jensen
Among the major questions that a practicing tester faces are deciding where to focus additional testing effort, and deciding when to stop testing. Test the least-tested code, and stop when all code is well-tested, is a reasonable answer. Many measures of "testedness" have been proposed; unfortunately, we do not know whether these are truly effective. In this paper we propose a novel evaluation of two of the most important and widely-used measures of test suite quality. The first measure is statement coverage, the simplest and best-known code coverage measure. The second measure is mutation score, a supposedly more powerful, though expensive, measure. We evaluate these measures using the actual criteria of interest: if a program element is (by these measures) well tested at a given point in time, it should require fewer future bug-fixes than a "poorly tested" element. If not, then it seems likely that we are not effectively measuring testedness. Using a large number of open source Java programs from Github and Apache, we show that both statement coverage and mutation score have only a weak negative correlation with bug-fixes. Despite the lack of strong correlation, there are statistically and practically significant differences between program elements for various binary criteria. Program elements (other than classes) covered by any test case see about half as many bug-fixes as those not covered, and a similar line can be drawn for mutation score thresholds. Our results have important implications for both software engineering practice and research evaluation.
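A contrived sketch (ours, not from the paper) of how the two measures can disagree: the tests below achieve 100% statement coverage of isAdult, yet the mutant that replaces >= with > survives because no test exercises the boundary value 18.

```java
// Illustration of statement coverage overstating testedness: every statement of
// isAdult is executed by the tests, but the hypothetical mutant `age > 18`
// behaves identically on both inputs and therefore survives.
public class TestednessExample {
    static boolean isAdult(int age) {
        return age >= 18; // mutant: age > 18 -- only observable at age == 18
    }

    public static void main(String[] args) {
        assert isAdult(30);  // executes the only statement: coverage is already 100%
        assert !isAdult(5);  // same statement again; mutant still undetected
        // A boundary test such as `assert isAdult(18);` would kill the mutant.
        System.out.println("Statement coverage: 100%; the >= -> > mutant is not killed.");
    }
}
```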
{"title":"Can testedness be effectively measured?","authors":"Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, Carlos Jensen","doi":"10.1145/2950290.2950324","DOIUrl":"https://doi.org/10.1145/2950290.2950324","url":null,"abstract":"Among the major questions that a practicing tester faces are deciding where to focus additional testing effort, and deciding when to stop testing. Test the least-tested code, and stop when all code is well-tested, is a reasonable answer. Many measures of \"testedness\" have been proposed; unfortunately, we do not know whether these are truly effective. In this paper we propose a novel evaluation of two of the most important and widely-used measures of test suite quality. The first measure is statement coverage, the simplest and best-known code coverage measure. The second measure is mutation score, a supposedly more powerful, though expensive, measure. We evaluate these measures using the actual criteria of interest: if a program element is (by these measures) well tested at a given point in time, it should require fewer future bug-fixes than a \"poorly tested\" element. If not, then it seems likely that we are not effectively measuring testedness. Using a large number of open source Java programs from Github and Apache, we show that both statement coverage and mutation score have only a weak negative correlation with bug-fixes. Despite the lack of strong correlation, there are statistically and practically significant differences between program elements for various binary criteria. Program elements (other than classes) covered by any test case see about half as many bug-fixes as those not covered, and a similar line can be drawn for mutation score thresholds. Our results have important implications for both software engineering practice and research evaluation.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"388 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90121109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
End-to-end memory behavior profiling with DINAMITE
Svetozar Miucin, C. Brady, Alexandra Fedorova
Performance bottlenecks related to a program's memory behavior are common, yet very hard to debug. Tools that attempt to aid software engineers in diagnosing these bugs are typically designed to handle specific use cases; they do not provide information to comprehensively explore memory problems and to find solutions. Detailed traces of memory accesses would enable developers to ask various questions about the program's memory behaviour, but these traces quickly become very large even for short executions. We present DINAMITE: a toolkit for Dynamic INstrumentation and Analysis for MassIve Trace Exploration. DINAMITE instruments every memory access with high-level debug information and provides a suite of extensible analysis tools to aid programmers in pinpointing memory bottlenecks.
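As a purely hypothetical illustration of what such traces enable (this sketch and its one-record-per-line trace format are our assumptions, not part of DINAMITE), the snippet below aggregates accesses per variable from a tiny textual trace; an extensible analysis tool would answer the same kind of question over billions of records.

```java
import java.util.HashMap;
import java.util.Map;

// Toy trace analysis: count accesses per variable to spot candidates for
// memory-behavior bottlenecks. The record format "<read|write> <variable> <file:line>"
// is invented here for illustration only.
public class TraceHistogram {
    public static void main(String[] args) {
        String[] trace = {
            "write buffer Cache.java:42",
            "read buffer Cache.java:57",
            "read buffer Cache.java:57",
            "write head Queue.java:13"
        };
        Map<String, Integer> counts = new HashMap<>();
        for (String record : trace) {
            String variable = record.split(" ")[1];
            counts.merge(variable, 1, Integer::sum);
        }
        counts.forEach((v, n) -> System.out.println(v + " accessed " + n + " times"));
    }
}
```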
{"title":"End-to-end memory behavior profiling with DINAMITE","authors":"Svetozar Miucin, C. Brady, Alexandra Fedorova","doi":"10.1145/2950290.2983941","DOIUrl":"https://doi.org/10.1145/2950290.2983941","url":null,"abstract":"Performance bottlenecks related to a program's memory behavior are common, yet very hard to debug. Tools that attempt to aid software engineers in diagnosing these bugs are typically designed to handle specific use cases; they do not provide information to comprehensively explore memory problems and to find solutions. Detailed traces of memory accesses would enable developers to ask various questions about the program's memory behaviour, but these traces quickly become very large even for short executions. We present DINAMITE: a toolkit for Dynamic INstrumentation and Analysis for MassIve Trace Exploration. DINAMITE instruments every memory access with highly debug information and provides a suite of extensible analysis tools to aid programmers in pinpointing memory bottlenecks.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"80 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90394603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Designing minimal effective normative systems with the help of lightweight formal methods
Jianye Hao, Eunsuk Kang, Jun Sun, D. Jackson
Normative systems (i.e., a set of rules) are an important approach to achieving effective coordination among (often an arbitrary number of) agents in multiagent systems. A normative system should be effective in ensuring the satisfaction of a desirable system property, and minimal (i.e., not containing norms that unnecessarily over-constrain the behaviors of agents). Designing or even automatically synthesizing minimal effective normative systems is highly non-trivial. Previous attempts at synthesizing such systems through simulations often fail to generate normative systems which are both minimal and effective. In this work, we propose a framework that facilitates the design of minimal effective normative systems using lightweight formal methods. Given that a minimal effective normative system which coordinates many agents must also be minimal and effective for a small number of agents, we start with automatically synthesizing one such system with a few agents. We then increase the number of agents so as to check whether the same design remains minimal and effective. If it is, we manually establish an induction proof so as to lift the design to an arbitrary number of agents.
{"title":"Designing minimal effective normative systems with the help of lightweight formal methods","authors":"Jianye Hao, Eunsuk Kang, Jun Sun, D. Jackson","doi":"10.1145/2950290.2950307","DOIUrl":"https://doi.org/10.1145/2950290.2950307","url":null,"abstract":"Normative systems (i.e., a set of rules) are an important approach to achieving effective coordination among (often an arbitrary number of) agents in multiagent systems. A normative system should be effective in ensuring the satisfaction of a desirable system property, and minimal (i.e., not containing norms that unnecessarily over-constrain the behaviors of agents). Designing or even automatically synthesizing minimal effective normative systems is highly non-trivial. Previous attempts on synthesizing such systems through simulations often fail to generate normative systems which are both minimal and effective. In this work, we propose a framework that facilitates designing of minimal effective normative systems using lightweight formal methods. Given a minimal effective normative system which coordinates many agents must be minimal and effective for a small number of agents, we start with automatically synthesizing one such system with a few agents. We then increase the number of agents so as to check whether the same design remains minimal and effective. If it is, we manually establish an induction proof so as to lift the design to an arbitrary number of agents.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79224698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Continuous deployment of mobile software at facebook (showcase)
Chuck Rossi, Elisa Shibley, Shi Su, Kent L. Beck, T. Savor, M. Stumm
Continuous deployment is the practice of releasing software updates to production as soon as they are ready, and it is receiving increased adoption in industry. The frequency of updates of mobile software has traditionally lagged the state of practice for cloud-based services for a number of reasons. Mobile versions can only be released periodically. Users can choose when and if to upgrade, which means that several different releases coexist in production. There are hundreds of Android hardware variants, which increases the risk of having errors in the software being deployed. Facebook has made significant progress in increasing the frequency of its mobile deployments. Over a period of 4 years, the Android release has gone from a deployment every 8 weeks to a deployment every week. In this paper, we describe in detail the mobile deployment process at FB. We present our findings from an extensive analysis of software engineering metrics based on data collected over a period of 7 years. A key finding is that the frequency of deployment does not directly affect developer productivity or software quality. We argue that this finding is due to the fact that increasing the frequency of continuous deployment forces improved release and deployment automation, which in turn reduces developer workload. Additionally, the data we present shows that dog-fooding and obtaining feedback from alpha and beta customers are critical to maintaining release quality.
{"title":"Continuous deployment of mobile software at facebook (showcase)","authors":"Chuck Rossi, Elisa Shibley, Shi Su, Kent L. Beck, T. Savor, M. Stumm","doi":"10.1145/2950290.2994157","DOIUrl":"https://doi.org/10.1145/2950290.2994157","url":null,"abstract":"Continuous deployment is the practice of releasing software updates to production as soon as it is ready, which is receiving increased adoption in industry. The frequency of updates of mobile software has traditionally lagged the state of practice for cloud-based services for a number of reasons. Mobile versions can only be released periodically. Users can choose when and if to upgrade, which means that several different releases coexist in production. There are hundreds of Android hardware variants, which increases the risk of having errors in the software being deployed. Facebook has made significant progress in increasing the frequency of its mobile deployments. Over a period of 4 years, the Android release has gone from a deployment every 8 weeks to a deployment every week. In this paper, we describe in detail the mobile deployment process at FB. We present our findings from an extensive analysis of software engineering metrics based on data collected over a period of 7 years. A key finding is that the frequency of deployment does not directly affect developer productivity or software quality. We argue that this finding is due to the fact that increasing the frequency of continuous deployment forces improved release and deployment automation, which in turn reduces developer workload. Additionally, the data we present shows that dog-fooding and obtaining feedback from alpha and beta customers is critical to maintaining release quality.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79466135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
Cozy: synthesizing collection data structures
Calvin Loncaric
Many applications require specialized data structures not found in standard libraries. Implementing new data structures by hand is tedious and error-prone. To alleviate this difficulty, we built a tool called Cozy that synthesizes data structures using counter-example guided inductive synthesis. We evaluate Cozy by showing how its synthesized implementations compare to handwritten implementations in terms of correctness and performance across four real-world programs. Cozy's data structures match the performance of the handwritten implementations while avoiding human error.
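For context, the hand-written Java class below (our example; it is neither Cozy's specification language nor its output) shows the kind of specialized collection such a synthesizer targets: a task store with a fast query by priority, where an auxiliary index must be kept consistent with the backing list by hand, precisely the tedious, error-prone work the tool aims to derive automatically from a specification.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hand-written specialized collection (illustration only): a list of tasks plus a
// priority index. Every mutation must update both structures consistently.
public class TaskStore {
    public record Task(String name, int priority) {}

    private final List<Task> all = new ArrayList<>();
    private final Map<Integer, List<Task>> byPriority = new HashMap<>();

    public void add(Task t) {
        all.add(t); // backing collection
        byPriority.computeIfAbsent(t.priority(), p -> new ArrayList<>()).add(t); // index
    }

    // Fast query that a plain List would answer only by scanning every element.
    public List<Task> withPriority(int p) {
        return byPriority.getOrDefault(p, List.of());
    }
}
```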
{"title":"Cozy: synthesizing collection data structures","authors":"Calvin Loncaric","doi":"10.1145/2950290.2986032","DOIUrl":"https://doi.org/10.1145/2950290.2986032","url":null,"abstract":"Many applications require specialized data structures not found in standard libraries. Implementing new data structures by hand is tedious and error-prone. To alleviate this difficulty, we built a tool called Cozy that synthesizes data structures using counter-example guided inductive synthesis. We evaluate Cozy by showing how its synthesized implementations compare to handwritten implementations in terms of correctness and performance across four real-world programs. Cozy's data structures match the performance of the handwritten implementations while avoiding human error.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"170 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79635595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Making invisible things visible: tracking down known vulnerabilities at 3000 companies (showcase)
Gazi Mahmud
This year, software development teams around the world are consuming BILLIONS of open source and third-party components. The good news: they are accelerating time to market. The bad news: 1 in 17 components they are using include known security vulnerabilities. In this talk, I will describe what Sonatype, the company behind The Central Repository that supports Apache Maven, has learned from analyzing how thousands of applications use open source components. I will also discuss how organizations like Mayo Clinic, Exxon, Capital One, the U.S. FDA and Intuit are utilizing the principles of software supply chain automation to improve application security and how organizations can balance the need for speed with quality and security early in the development cycle.
{"title":"Making invisible things visible: tracking down known vulnerabilities at 3000 companies (showcase)","authors":"Gazi Mahmud","doi":"10.1145/2950290.2994155","DOIUrl":"https://doi.org/10.1145/2950290.2994155","url":null,"abstract":"This year, software development teams around the world are consuming BILLIONS of open source and third-party components. The good news: they are accelerating time to market. The bad news: 1 in 17 components they are using include known security vulnerabilities. In this talk, I will describe what Sonatype, the company behind The Central Repository that supports Apache Maven, has learned from analyzing how thousands of applications use open source components. I will also discuss how organizations like Mayo Clinic, Exxon, Capital One, the U.S. FDA and Intuit are utilizing the principles of software supply chain automation to improve application security and how organizations can balance the need for speed with quality and security early in the development cycle.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82601195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Call graph construction for Java libraries
Michael Reif, Michael Eichberg, Ben Hermann, Johannes Lerch, M. Mezini
Today, every application uses software libraries. Yet, while a lot of research exists on analyzing applications, research that targets the analysis of libraries independent of any application is scarce. This is unfortunate, because, for developers of libraries, such as the Java Development Kit (JDK), it is crucial to ensure that the library behaves as intended regardless of how it is used. To fill this gap, we discuss the construction of call graphs for libraries that abstract over all potential library usages. Call graphs are particularly relevant as they are a precursor of many advanced analyses, such as inter-procedural data-flow analyses. We show that the current practice of using call graph algorithms designed for applications to analyze libraries leads to call graphs that, at the same time, lack relevant call edges and contain unnecessary edges. This motivates the need for call graph construction algorithms dedicated to libraries. Unlike algorithms for applications, call graph construction algorithms for libraries must take into consideration the goals of subsequent analyses. Specifically, we show that it is essential to distinguish the scenario of an analysis for potentially exploitable vulnerabilities from that of an analysis for general software quality attributes, e.g., dead methods or unused fields. This distinction affects the decision about what constitutes the library-private implementation, which therefore needs special treatment. Thus, building one call graph that satisfies all needs is not sensible. Overall, we observed that the proposed call graph algorithms reduce the number of call edges by up to 30% when compared to existing approaches.
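A small hypothetical library class (ours, not from the paper) illustrates why libraries need their own call-graph assumptions: under an open-world view, the call to the overridable validate method may dispatch to an unknown client subclass, an edge an application-centric algorithm would never add, whereas the package-private helper belongs to the library-private implementation and can be resolved within the library.

```java
// Hypothetical library code illustrating open-world call-graph edges.
public class Exporter {
    public void process(String record) {
        validate(record); // may dispatch to an unknown client subclass's override
        log(record);      // resolvable inside the library
    }

    // Overridable by any application class that extends Exporter.
    protected void validate(String record) {
        if (record == null) throw new IllegalArgumentException("record must not be null");
    }

    // Package-private: part of the library-private implementation; clients outside
    // this package cannot override it.
    void log(String record) {
        System.out.println("exported: " + record);
    }
}
```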
{"title":"Call graph construction for Java libraries","authors":"Michael Reif, Michael Eichberg, Ben Hermann, Johannes Lerch, M. Mezini","doi":"10.1145/2950290.2950312","DOIUrl":"https://doi.org/10.1145/2950290.2950312","url":null,"abstract":"Today, every application uses software libraries. Yet, while a lot of research exists w.r.t. analyzing applications, research that targets the analysis of libraries independent of any application is scarce. This is unfortunate, because, for developers of libraries, such as the Java Development Kit (JDK), it is crucial to ensure that the library behaves as intended regardless of how it is used. To fill this gap, we discuss the construction of call graphs for libraries that abstract over all potential library usages. Call graphs are particularly relevant as they are a precursor of many advanced analyses, such as inter-procedural data-flow analyses. We show that the current practice of using call graph algorithms designed for applications to analyze libraries leads to call graphs that, at the same time, lack relevant call edges and contain unnecessary edges. This motivates the need for call graph construction algorithms dedicated to libraries. Unlike algorithms for applications, call graph construction algorithms for libraries must take into consideration the goals of subsequent analyses. Specifically, we show that it is essential to distinguish between the scenario of an analysis for potential exploitable vulnerabilities from the scenario of an analysis for general software quality attributes, e.g., dead methods or unused fields. This distinction affects the decision about what constitutes the library-private implementation, which therefore, needs special treatment. Thus, building one call graph that satisfies all needs is not sensical. Overall, we observed that the proposed call graph algorithms reduce the number of call edges up to 30% when compared to existing approaches.","PeriodicalId":20532,"journal":{"name":"Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82647672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38