
Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering: latest publications

RunDroid: recovering execution call graphs for Android applications
Yujie Yuan, Lihua Xu, Xusheng Xiao, Andy Podgurski, Huibiao Zhu
Fault localization is a well-received technique for helping developers identify faulty statements of a program. Research has shown that the coverage of faulty statements and their predecessors in the program dependence graph is important for effective fault localization. However, app executions in Android are split into segments in different components, i.e., methods, threads, and processes, posing challenges for traditional program dependence computation and, in turn, rendering fault localization less effective. We present RunDroid, a tool for recovering the dynamic call graphs of app executions in Android, assisting existing tools in more precise program dependence computation. For each execution, RunDroid captures and recovers method calls not only from the application layer, but also between applications and the Android framework. Moreover, to deal with the multi-threaded communication widely adopted in Android applications, RunDroid also captures method calls that are split among threads. Demo: https://github.com/MiJack/RunDroid Video: https://youtu.be/EM7TJbE-Oaw
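The abstract describes recovering dynamic call graphs from segmented executions but gives no implementation details. Below is a minimal, hypothetical Python sketch of how caller-callee edges could be rebuilt from a logged trace of method-entry/exit events, including a cross-thread hand-off; the event format, the "post" linking rule, and all method names are assumptions for illustration, not RunDroid's actual mechanism.

```python
from collections import defaultdict

# Hypothetical trace events: (thread_id, kind, method).
# kind is "enter", "exit", or "post" (a cross-thread hand-off, e.g. Handler.post).
trace = [
    (1, "enter", "MainActivity.onCreate"),
    (1, "enter", "MainActivity.loadData"),
    (1, "post",  "Worker.run"),          # schedules work on another thread
    (1, "exit",  "MainActivity.loadData"),
    (1, "exit",  "MainActivity.onCreate"),
    (2, "enter", "Worker.run"),
    (2, "enter", "Api.fetch"),
    (2, "exit",  "Api.fetch"),
    (2, "exit",  "Worker.run"),
]

def build_call_graph(events):
    """Recover caller -> callee edges from enter/exit events.

    A per-thread stack gives intra-thread edges; a 'post' event records the
    posting method so that the later 'enter' of the posted method on another
    thread can be linked back to it (the assumed cross-thread rule).
    """
    stacks = defaultdict(list)          # thread_id -> current call stack
    pending = {}                        # posted method -> posting method
    edges = set()
    for tid, kind, method in events:
        stack = stacks[tid]
        if kind == "post":
            if stack:
                pending[method] = stack[-1]
        elif kind == "enter":
            if stack:                              # ordinary intra-thread call
                edges.add((stack[-1], method))
            elif method in pending:                # entry point of posted work
                edges.add((pending.pop(method), method))
            stack.append(method)
        elif kind == "exit":
            if stack and stack[-1] == method:
                stack.pop()
    return edges

for caller, callee in sorted(build_call_graph(trace)):
    print(f"{caller} -> {callee}")
```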
DOI: 10.1145/3106237.3122821 (published 2017-08-21)
Citations: 11
CLTSA: labelled transition system analyser with counting fluent support
Germán Regis, Renzo Degiovanni, Nicolás D'Ippolito, Nazareno Aguirre
In this paper we present CLTSA (Counting Fluents Labelled Transition System Analyser), an extension of LTSA (Labelled Transition System Analyser) that incorporates counting fluents, a useful mechanism to capture properties related to counting events. Counting fluent temporal logic is a formalism for specifying properties of event-based systems, which complements the notion of fluent with the related concept of counting fluent. While fluents allow us to capture boolean properties of the behaviour of a reactive system, counting fluents are numerical values that enumerate event occurrences. The tool supports a superset of FSP (Finite State Processes) that allows one to define LTL properties involving counting fluents, which can be model checked on FSP processes. Detailed information can be found at http://dc.exa.unrc.edu.ar/tools/cltsa.
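CLTSA's own input language is an FSP extension, which the abstract does not show. As a language-neutral illustration of the counting-fluent idea only, the following Python sketch drives a numeric fluent with incrementing and decrementing events and checks a bound along a trace; the event names and the bound are made up for this example.

```python
def check_counting_fluent(trace, inc_events, dec_events, bound, initial=0):
    """Toy illustration of a counting fluent: a numeric value driven by events.

    Returns (holds, history), where `holds` says whether the safety property
    'fluent <= bound at every step' held along the trace.
    """
    value, history = initial, []
    for event in trace:
        if event in inc_events:
            value += 1
        elif event in dec_events:
            value -= 1
        history.append((event, value))
        if value > bound:
            return False, history
    return True, history

# Example property: at most 2 requests may be pending at any time.
trace = ["req", "req", "ack", "req", "req", "ack"]
ok, history = check_counting_fluent(trace, inc_events={"req"},
                                    dec_events={"ack"}, bound=2)
print("property holds:", ok)
for event, value in history:
    print(f"after {event}: pending = {value}")
```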
DOI: 10.1145/3106237.3122828 (published 2017-08-21)
Citations: 0
Why do developers use trivial packages? an empirical case study on npm
Rabe Abdalkareem, Olivier Nourry, Sultan Wehaibi, Suhaib Mujahid, Emad Shihab
Code reuse is traditionally seen as good practice. Recent trends have pushed the concept of code reuse to an extreme, by using packages that implement simple and trivial tasks, which we call 'trivial packages'. A recent incident where a trivial package led to the breakdown of some of the most popular web applications, such as Facebook and Netflix, made it imperative to question the growing use of trivial packages. Therefore, in this paper, we mine more than 230,000 npm packages and 38,000 JavaScript applications in order to study the prevalence of trivial packages. We found that trivial packages are common and are increasing in popularity, making up 16.8% of the studied npm packages. We performed a survey with 88 Node.js developers who use trivial packages to understand the reasons for and drawbacks of their use. Our survey revealed that trivial packages are used because they are perceived to be well implemented and tested pieces of code. However, developers are concerned about maintenance effort and the risk of breakage due to the extra dependencies trivial packages introduce. To objectively verify the survey results, we empirically validate the most cited reason and drawback and find that, contrary to developers' beliefs, only 45.2% of trivial packages even have tests. However, trivial packages appear to be 'deployment tested' and to have similar test, usage, and community interest as non-trivial packages. On the other hand, we found that 11.5% of the studied trivial packages have more than 20 dependencies. Hence, developers should be careful about which trivial packages they decide to use.
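The abstract does not state the exact criteria used to label a package as trivial. The sketch below, under an assumed lines-of-code threshold, shows the kind of mining involved: it reads a locally unpacked npm package, counts its shipped JavaScript lines and direct dependencies from package.json, and flags it accordingly. The threshold and example path are placeholders, not the paper's definitions.

```python
import json
from pathlib import Path

LOC_THRESHOLD = 35   # assumed cut-off for "trivial"; not necessarily the paper's criterion

def count_source_loc(package_dir: Path) -> int:
    """Count non-empty lines in the JavaScript files shipped with the package."""
    loc = 0
    for js_file in package_dir.rglob("*.js"):
        if "node_modules" in js_file.parts:
            continue                      # skip the package's own installed deps
        text = js_file.read_text(errors="ignore")
        loc += sum(1 for line in text.splitlines() if line.strip())
    return loc

def inspect_package(package_dir: Path) -> dict:
    """Classify a locally unpacked npm package and count its direct dependencies."""
    manifest = json.loads((package_dir / "package.json").read_text())
    loc = count_source_loc(package_dir)
    return {
        "name": manifest.get("name", package_dir.name),
        "loc": loc,
        "direct_dependencies": len(manifest.get("dependencies", {})),
        "trivial": loc <= LOC_THRESHOLD,
    }

if __name__ == "__main__":
    # Point this at any unpacked npm package, e.g. node_modules/left-pad.
    print(inspect_package(Path("node_modules/left-pad")))
```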
DOI: 10.1145/3106237.3106267 (published 2017-08-21)
Citations: 112
Better test cases for better automated program repair
Jinqiu Yang, Alexey Zhikhartsev, Yuefei Liu, Lin Tan
Automated generate-and-validate program repair techniques (G&V techniques) suffer from generating many overfitted patches due to the limited capabilities of test cases. Such overfitted patches are incorrect patches, which only make all given test cases pass but fail to fix the bugs. In this work, we propose an overfitted patch detection framework named Opad (Overfitted PAtch Detection). Opad helps improve G&V techniques by enhancing existing test cases to filter out overfitted patches. To enhance test cases, Opad uses fuzz testing to generate new test cases, and employs two test oracles (crash and memory-safety) to enhance validity checking of automatically generated patches. Opad also uses a novel metric (named O-measure) for deciding whether automatically generated patches overfit. Evaluated on 45 bugs from 7 large systems (the same benchmark used by GenProg and SPR), Opad filters out 75.2% (321/427) of the overfitted patches generated by GenProg/AE, Kali, and SPR. In addition, Opad guides SPR to generate correct patches for one more bug (the original SPR generates correct patches for 11 bugs). Our analysis also shows that up to 40% of such automatically generated test cases may further improve G&V techniques if empowered with better test oracles (in addition to the crash and memory-safety oracles employed by Opad).
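As a rough illustration of the filtering step described above (not Opad's actual implementation), the sketch below runs fuzz-generated inputs against candidate patched binaries and rejects any patch whose run trips a crash or memory-safety oracle. The binary paths, the AddressSanitizer build assumption, and the sample inputs are placeholders.

```python
import subprocess

def run_with_oracles(binary: str, test_input: str, timeout: int = 10) -> bool:
    """Return True if the run passes both oracles (no crash, no memory error).

    Assumes the binary was built with AddressSanitizer so that memory-safety
    violations appear in stderr; that build choice is an assumption here.
    """
    try:
        proc = subprocess.run([binary], input=test_input, capture_output=True,
                              text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False                      # treat hangs as failures
    crashed = proc.returncode < 0         # killed by a signal (e.g. SIGSEGV)
    memory_error = "AddressSanitizer" in proc.stderr
    return not crashed and not memory_error

def filter_overfitted(patched_binaries, fuzz_inputs):
    """Keep only patches whose binaries survive every fuzz-generated input."""
    surviving = []
    for binary in patched_binaries:
        if all(run_with_oracles(binary, case) for case in fuzz_inputs):
            surviving.append(binary)
    return surviving

if __name__ == "__main__":
    candidates = ["./patched_v1", "./patched_v2"]         # placeholder paths
    inputs = ["", "A" * 1024, "0,-1,9999999"]             # stand-ins for fuzzer output
    print("plausible patches:", filter_overfitted(candidates, inputs))
```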
DOI: 10.1145/3106237.3106274 (published 2017-08-21)
Citations: 117
System performance optimization via design and configuration space exploration
Chong Tang
The runtime performance of a software system often depends on a large number of static parameters, which usually interact in complex ways to carry out system functionality and influence system performance. It is hard to understand such configuration spaces and to find combinations of parameter values that attain the levels of performance that are actually available. In practice, engineers often just accept the default settings, leading such systems to significantly underperform relative to their potential. This problem, in turn, has impacts on cost, revenue, customer satisfaction, business reputation, and mission effectiveness. To improve the overall performance of end-to-end systems, we propose to systematically explore (i) how to design new systems for good performance through design space synthesis and evaluation, and (ii) how to auto-configure an existing system for better performance through heuristic configuration space search. In addition, this research further studies execution traces of a system to predict runtime performance under new configurations.
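The abstract outlines heuristic configuration-space search without naming a specific algorithm. The following sketch shows one simple instantiation, random search over a small hypothetical configuration space with a synthetic benchmark standing in for real performance measurements; the parameter names and scoring function are invented for illustration.

```python
import random

# Hypothetical configuration space: parameter name -> candidate values.
CONFIG_SPACE = {
    "buffer_size_kb": [64, 128, 256, 512],
    "worker_threads": [1, 2, 4, 8, 16],
    "compression": ["none", "lz4", "gzip"],
}

def measure_latency(config: dict) -> float:
    """Placeholder benchmark. In practice this would deploy the configuration
    and run a workload; here it is a synthetic score for illustration."""
    base = 100.0 / config["worker_threads"] + 512 / config["buffer_size_kb"]
    penalty = {"none": 0.0, "lz4": 2.0, "gzip": 8.0}[config["compression"]]
    return base + penalty + random.uniform(0.0, 1.0)   # measurement noise

def random_search(budget: int = 50):
    """Sample configurations at random and keep the best-performing one."""
    best_config, best_latency = None, float("inf")
    for _ in range(budget):
        config = {name: random.choice(values) for name, values in CONFIG_SPACE.items()}
        latency = measure_latency(config)
        if latency < best_latency:
            best_config, best_latency = config, latency
    return best_config, best_latency

if __name__ == "__main__":
    config, latency = random_search()
    print(f"best configuration: {config} (latency {latency:.2f} ms)")
```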
DOI: 10.1145/3106237.3119880 (published 2017-08-21)
Citations: 6
Model-driven software engineering in practice: privacy-enhanced filtering of network traffic
Roel van Dijk, Christophe Creeten, J. V. D. Ham, J. V. D. Bos
Network traffic data contains a wealth of information for use in security analysis and application development. Unfortunately, it also usually contains confidential or otherwise sensitive information, prohibiting sharing and analysis. Existing automated anonymization solutions are hard to maintain and tend to be outdated. We present Privacy-Enhanced Filtering (PEF), a model-driven prototype framework that relies on declarative descriptions of protocols and a set of filter rules, which are used to automatically transform network traffic data to remove sensitive information. This paper discusses the design, implementation and application of PEF, which is available as open-source software and configured for use in a typical malware detection scenario.
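PEF relies on declarative protocol descriptions and filter rules; neither format is given in the abstract. The sketch below illustrates only the filtering idea, with a hypothetical per-field rule table applied to an already-parsed packet record, where sensitive fields are dropped or replaced by stable pseudonyms.

```python
import hashlib

# Hypothetical filter rules: field name -> action ("drop", "hash", or "keep").
FILTER_RULES = {
    "src_ip":   "hash",   # pseudonymize but keep linkability across packets
    "dst_ip":   "hash",
    "payload":  "drop",   # remove entirely
    "protocol": "keep",
    "length":   "keep",
}

def pseudonymize(value: str) -> str:
    """Stable pseudonym: the same input always maps to the same token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def apply_rules(packet: dict) -> dict:
    """Apply per-field rules to one parsed packet record."""
    filtered = {}
    for field, value in packet.items():
        action = FILTER_RULES.get(field, "drop")   # unknown fields are dropped
        if action == "keep":
            filtered[field] = value
        elif action == "hash":
            filtered[field] = pseudonymize(str(value))
    return filtered

if __name__ == "__main__":
    packet = {"src_ip": "192.0.2.10", "dst_ip": "198.51.100.7",
              "protocol": "TCP", "length": 1500, "payload": "GET /account ..."}
    print(apply_rules(packet))
```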
DOI: 10.1145/3106237.3117777 (published 2017-08-21)
Citations: 2
When program analysis meets mobile security: an industrial study of misusing Android internet sockets
Wenqi Bu, Minhui Xue, Lihua Xu, Yajin Zhou, Zhushou Tang, Tao Xie
Despite recent progress in program analysis techniques to identify vulnerabilities in Android apps, significant challenges remain for applying these techniques in large-scale industrial environments. Modern software-security providers, such as Qihoo 360 and Pwnzen (two leading companies in China), are often required to process more than 10 million mobile apps at each run. In this work, we focus on effectively and efficiently identifying vulnerable usage of Internet sockets in an industrial setting. To achieve this goal, we propose a practical hybrid approach that enables lightweight yet precise detection in the industrial setting. In particular, we integrate the process of categorizing potentially vulnerable apps with analysis techniques, to reduce the inevitable human inspection effort. We categorize potentially vulnerable apps based on characteristics of vulnerability signatures, to reduce the burden on static analysis. We flexibly integrate static and dynamic analyses for the apps in each identified family, to refine the family signatures and hence achieve precise detection. We implement our approach in a practical system and deploy the system on the Pwnzen platform. Using the system, we identify and report potential vulnerabilities of 24 vulnerable apps (falling into 3 vulnerability families) to their developers, and some of these reported vulnerabilities were previously unknown. The apps of each vulnerability family have over 50 million downloads in total. We also propose countermeasures and highlight promising directions for technology transfer.
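The concrete vulnerability signatures are not listed in the abstract. To illustrate the lightweight categorization step, the sketch below matches invented regular-expression signatures against decompiled app sources and reports which hypothetical families an app falls into; the patterns and directory layout are assumptions, not the paper's signatures.

```python
import re
from pathlib import Path

# Invented example signatures; the paper's actual vulnerability families differ.
SIGNATURES = {
    "open-local-server": re.compile(r'new\s+ServerSocket\s*\(\s*\d+\s*\)'),
    "plain-socket-to-remote": re.compile(r'new\s+Socket\s*\(\s*"[^"]+"'),
}

def categorize_app(decompiled_dir: Path) -> set:
    """Return the set of signature families matched anywhere in the app's sources."""
    families = set()
    for src in decompiled_dir.rglob("*.java"):
        text = src.read_text(errors="ignore")
        for family, pattern in SIGNATURES.items():
            if pattern.search(text):
                families.add(family)
    return families

if __name__ == "__main__":
    # Point this at a directory of decompiled sources (e.g. produced by jadx).
    print(categorize_app(Path("decompiled/app_under_test")))
```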
DOI: 10.1145/3106237.3117764 (published 2017-08-21)
Citations: 8
Reference architectures and Scrum: friends or foes?
M. Galster, S. Angelov, Silverio Martínez-Fernández, Daniel Tofan
Software reference architectures provide templates and guidelines for designing systems in a particular domain. Companies use them to achieve interoperability of (parts of) their software, standardization, and faster development. In contrast to system-specific software architectures that "emerge" during development, reference architectures dictate significant parts of the software design early on. Agile software development frameworks (such as Scrum) acknowledge changing software requirements and the need to adapt the software design accordingly. In this paper, we present lessons learned about how reference architectures interact with Scrum (the most frequently used agile process framework). These lessons are based on observing software development projects in five companies. We found that reference architectures can support good practice in Scrum: They provide enough design upfront without too much effort, reduce documentation activities, facilitate knowledge sharing, and contribute to "architectural thinking" of developers. However, reference architectures can impose risks or even threats to the success of Scrum (e.g., to self-organizing and motivated teams).
DOI: 10.1145/3106237.3117773 (published 2017-08-21)
Citations: 4
Applying deep learning based automatic bug triager to industrial projects
Sun-Ro Lee, Min-Jae Heo, Chan-Gun Lee, Milhan Kim, Gaeul Jeong
Finding the appropriate developer for a bug report, so-called 'bug triage', is one of the bottlenecks in the bug resolution process. To address this problem, many recent studies have proposed various automatic bug triage techniques. We argue that most previous studies focused on open source projects only and did not consider deep learning techniques. In this paper, we propose to use a convolutional neural network and word embedding to build an automatic bug triager. The results of experiments on both industrial and open source projects reveal the benefits of the automatic approach and suggest cooperation between human and automatic triagers. We also report our experience in integrating and operating the proposed system in an industrial development environment.
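The abstract names the model family (a convolutional neural network over word embeddings) but not its exact architecture. Below is a minimal PyTorch sketch of a typical text CNN for assigning a bug report to one of a set of developers; the vocabulary size, filter sizes, and other hyperparameters are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class BugTriageCNN(nn.Module):
    """Word embedding + 1-D convolutions + max pooling + softmax over developers."""

    def __init__(self, vocab_size, num_developers, embed_dim=128,
                 num_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_developers)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                     # Conv1d expects (batch, channels, seq)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)       # (batch, filters * #kernel_sizes)
        return self.classifier(features)          # logits per candidate developer

if __name__ == "__main__":
    model = BugTriageCNN(vocab_size=20_000, num_developers=50)
    fake_reports = torch.randint(1, 20_000, (8, 200))   # 8 tokenized bug reports
    logits = model(fake_reports)
    print("predicted developer ids:", logits.argmax(dim=1).tolist())
```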
DOI: 10.1145/3106237.3117776 (published 2017-08-21)
Citations: 80
On the scalability of Linux kernel maintainers' work
Minghui Zhou, Qingying Chen, A. Mockus, Fengguang Wu
Open source software ecosystems evolve ways to balance the workload among groups of participants ranging from core groups to peripheral groups. As ecosystems grow, it is not clear whether the mechanisms that previously made them work will continue to be relevant or whether new mechanisms will need to evolve. The impact of failure for critical ecosystems such as Linux is enormous, yet the understanding of why they function and are effective is limited. We, therefore, aim to understand how the Linux kernel sustains its growth, how to characterize the workload of maintainers, and whether or not the existing mechanisms are scalable. We quantify maintainers' work through the files that are maintained, and the change activity and the numbers of contributors in those files. We find systematic differences among modules; these differences are stable over time, which suggests that certain architectural features, commercial interests, or module-specific practices lead to distinct sustainable equilibria. We find that most of the modules have not grown appreciably over the last decade; most growth has been absorbed by a few modules. We also find that the effort per maintainer does not increase, even though the community has hypothesized that required effort might increase. However, the distribution of work among maintainers is highly unbalanced, suggesting that a few maintainers may experience increasing workload. We find that the practice of assigning multiple maintainers to a file yields only a square-root (power of 1/2) increase in productivity. We expect that our proposed framework to quantify maintainer practices will help clarify the factors that allow rapidly growing ecosystems to be sustainable.
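One illustrative way to approximate the per-file workload measures mentioned above from a local repository checkout is sketched below: it parses `git log` output to count, for each file, the number of commits and distinct authors over a time window. The study's actual metrics are richer; this is only a starting point, and the "@" prefix used to mark author lines is an assumption of the sketch.

```python
import subprocess
from collections import defaultdict

def per_file_activity(repo_path: str, since: str = "1 year ago") -> dict:
    """Approximate per-file workload: commits and distinct authors touching
    each file, recovered from `git log --name-only`."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--name-only", "--pretty=format:@%ae"],
        capture_output=True, text=True, check=True).stdout
    commits = defaultdict(int)
    authors = defaultdict(set)
    current_author = None
    for line in log.splitlines():
        if line.startswith("@"):          # author marker emitted by the format string
            current_author = line[1:]
        elif line.strip() and current_author:
            commits[line] += 1            # any other non-empty line is a file path
            authors[line].add(current_author)
    return {path: {"commits": commits[path], "authors": len(authors[path])}
            for path in commits}

if __name__ == "__main__":
    activity = per_file_activity(".")
    busiest = sorted(activity.items(), key=lambda kv: kv[1]["commits"], reverse=True)[:10]
    for path, stats in busiest:
        print(f"{stats['commits']:5d} commits, {stats['authors']:3d} authors  {path}")
```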
DOI: 10.1145/3106237.3106287 (published 2017-08-21)
Citations: 28