
Latest publications: 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)

View-Centric Performance Optimization for Database-Backed Web Applications
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00104
Junwen Yang, Cong Yan, Chengcheng Wan, Shan Lu, Alvin Cheung
Web developers face the stringent task of designing informative web pages while keeping the page-load time low. This task has become increasingly challenging as most web contents are now generated by processing ever-growing amount of user data stored in back-end databases. It is difficult for developers to understand the cost of generating every web-page element, not to mention explore and pick the web design with the best trade-off between performance and functionality. In this paper, we present Panorama, a view-centric and database-aware development environment for web developers. Using database-aware program analysis and novel IDE design, Panorama provides developers with intuitive information about the cost and the performance-enhancing opportunities behind every HTML element, as well as suggesting various global code refactorings that enable developers to easily explore a wide spectrum of performance and functionality trade-offs.
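The abstract contains no code; as a rough, purely hypothetical sketch of the view-centric idea (the class and element names below are ours, not Panorama's), an IDE pass could map each HTML element in a rendered view to the ORM queries that produce its data and surface an estimated cost per element:

    import java.util.*;

    // Hypothetical sketch (not Panorama's implementation): attribute estimated query
    // cost to the HTML elements whose data those queries produce.
    class ViewCostReport {
        // element id in the rendered view -> estimated milliseconds spent in queries
        static Map<String, Double> estimate(Map<String, List<Double>> queriesPerElement) {
            Map<String, Double> cost = new HashMap<>();
            for (Map.Entry<String, List<Double>> e : queriesPerElement.entrySet()) {
                double total = 0;
                for (double queryCostMs : e.getValue()) total += queryCostMs;
                cost.put(e.getKey(), total);   // shown next to the element in the IDE
            }
            return cost;
        }

        public static void main(String[] args) {
            Map<String, List<Double>> q = new HashMap<>();
            q.put("#recent-comments", Arrays.asList(1.2, 38.5)); // e.g. an N+1 query pattern
            q.put("#user-name", Arrays.asList(0.4));
            System.out.println(estimate(q)); // highlights #recent-comments as the expensive element
        }
    }

In a report like this, an element backed by an expensive or repeated query pattern stands out immediately, which is the kind of performance-enhancing opportunity the abstract describes.
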
Citations: 18
Grey-Box Concolic Testing on Binary Code
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00082
Jaeseung Choi, J. Jang, Choongwoo Han, S. Cha
We present grey-box concolic testing, a novel path-based test case generation method that combines the best of both white-box and grey-box fuzzing. At a high level, our technique systematically explores execution paths of a program under test as in white-box fuzzing, a.k.a. concolic testing, while not giving up the simplicity of grey-box fuzzing: it only uses a lightweight instrumentation, and it does not rely on an SMT solver. We implemented our technique in a system called Eclipser, and compared it to the state-of-the-art grey-box fuzzers (including AFLFast, LAF-intel, Steelix, and VUzzer) as well as a symbolic executor (KLEE). In our experiments, we achieved higher code coverage and found more bugs than the other tools.
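As a hedged illustration of how a branch can be "solved" without an SMT solver (this code is ours and greatly simplified, not Eclipser's), lightweight instrumentation can report a branch's comparison distance as a function of a single input byte, and a plain search can then find a byte value that flips the branch:

    import java.util.function.IntUnaryOperator;

    // Illustrative sketch of grey-box concolic search (not Eclipser's actual code):
    // instrumentation reports the signed "branch distance" lhs - rhs at a target branch;
    // we search over one input byte for the value at which the branch outcome changes.
    class BranchFlipper {
        // branchDistance maps a candidate byte value (0..255) to the observed lhs - rhs
        static int solveByte(IntUnaryOperator branchDistance) {
            int lo = 0, hi = 255;
            // assume the distance is monotone in the byte value, as for simple comparisons;
            // binary-search for the zero crossing instead of calling an SMT solver
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (branchDistance.applyAsInt(mid) < 0) lo = mid + 1; else hi = mid;
            }
            return lo;
        }

        public static void main(String[] args) {
            // toy target: branch "if (input >= 0xA7)" -> distance = input - 0xA7
            int flipping = solveByte(b -> b - 0xA7);
            System.out.println("byte value that flips the branch: 0x" + Integer.toHexString(flipping));
        }
    }
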
Citations: 59
Training Binary Classifiers as Data Structure Invariants
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00084
F. Molina, Renzo Degiovanni, Pablo Ponzio, Germán Regis, Nazareno Aguirre, M. Frias
We present a technique to distinguish valid from invalid data structure objects. The technique is based on building an artificial neural network, more precisely a binary classifier, and training it to identify valid and invalid instances of a data structure. The obtained classifier can then be used in place of the data structure's invariant, in order to attempt to identify (in)correct behaviors in programs manipulating the structure. In order to produce the valid objects to train the network, an assumed-correct set of object building routines is randomly executed. Invalid instances are produced by generating values for object fields that "break" the collected valid values, i.e., that assign values to object fields that have not been observed as feasible in the assumed-correct executions that led to the collected valid instances. We experimentally assess this approach, over a benchmark of data structures. We show that this learning technique produces classifiers that achieve significantly better accuracy in classifying valid/invalid objects compared to a technique for dynamic invariant detection, and leads to improved bug finding.
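A minimal sketch of the idea, assuming a toy linked list whose only learned property is that its cached size field matches the number of reachable nodes (the example and feature encoding are ours, not the authors' implementation):

    import java.util.*;

    // Sketch: encode list instances as hand-picked feature vectors, generate valid instances
    // (size field consistent) and invalid ones (size field "broken"), and train a tiny
    // perceptron to separate them, standing in for the structure's invariant.
    class InvariantLearner {
        static double[] w = new double[3]; // two weights + bias

        static int predict(double[] x) {
            return (w[0] * x[0] + w[1] * x[1] + w[2]) >= 0 ? 1 : 0; // 1 = valid, 0 = invalid
        }

        static void train(List<double[]> xs, List<Integer> ys) {
            for (int epoch = 0; epoch < 200; epoch++)
                for (int i = 0; i < xs.size(); i++) {
                    int err = ys.get(i) - predict(xs.get(i));
                    w[0] += 0.1 * err * xs.get(i)[0];
                    w[1] += 0.1 * err * xs.get(i)[1];
                    w[2] += 0.1 * err;
                }
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            List<double[]> xs = new ArrayList<>();
            List<Integer> ys = new ArrayList<>();
            for (int i = 0; i < 400; i++) {
                int reachable = rnd.nextInt(10);                 // nodes reachable from the head
                boolean valid = rnd.nextBoolean();
                int sizeField = valid ? reachable : reachable + 1 + rnd.nextInt(5); // broken cached size
                // features: |cached size - reachable nodes| and the list length itself
                xs.add(new double[]{Math.abs(sizeField - reachable), reachable});
                ys.add(valid ? 1 : 0);
            }
            train(xs, ys);
            System.out.println(predict(new double[]{0, 3}) + " " + predict(new double[]{4, 3})); // expect: 1 0
        }
    }
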
Citations: 13
PIVOT: Learning API-Device Correlations to Facilitate Android Compatibility Issue Detection
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00094
Lili Wei, Yepang Liu, S. Cheung
The heavily fragmented Android ecosystem has induced various compatibility issues in Android apps. The search space for such fragmentation-induced compatibility issues (FIC issues) is huge, comprising three dimensions: device models, Android OS versions, and Android APIs. FIC issues, especially those arising from device models, evolve quickly with the frequent release of new device models to the market. As a result, an automated technique is desired to maintain timely knowledge of such FIC issues, which are mostly undocumented. In this paper, we propose such a technique, PIVOT, that automatically learns API-device correlations of FIC issues from existing Android apps. PIVOT extracts and prioritizes API-device correlations from a given corpus of Android apps. We evaluated PIVOT with popular Android apps on Google Play. Evaluation results show that PIVOT can effectively prioritize valid API-device correlations for app corpora collected at different time. Leveraging the knowledge in the learned API-device correlations, we further conducted a case study and successfully uncovered ten previously-undetected FIC issues in open-source Android apps.
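As a hedged sketch of what "prioritizing API-device correlations" can look like (the scoring and all names below are ours, not PIVOT's algorithm), one can count how often an API call appears guarded by a check on a particular device model across a corpus of apps and rank the pairs:

    import java.util.*;

    // Illustrative sketch: rank (API, device-model) pairs by how many apps in a corpus
    // contain that API call guarded by a check on that device model.
    class ApiDeviceCorrelation {
        public static void main(String[] args) {
            // counts hypothetically extracted by static analysis of many apps:
            // key = "api -> device model", value = number of apps with that guarded usage
            Map<String, Integer> coOccurrence = new HashMap<>();
            coOccurrence.put("Camera.setParameters -> SM-G900", 14);
            coOccurrence.put("AudioRecord.startRecording -> Nexus 6P", 9);
            coOccurrence.put("View.setLayerType -> SM-G900", 2);

            coOccurrence.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())  // more apps agreeing -> higher priority
                .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
        }
    }
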
Citations: 35
Easy Modelling and Verification of Unpredictable and Preemptive Interrupt-Driven Systems
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00037
Minxue Pan, Shouyu Chen, Yu Pei, Tian Zhang, Xuandong Li
The widespread real-time and embedded systems are mostly interrupt-driven because their heavy interaction with the environment is often initiated by interrupts. With the interrupt arrival being unpredictable and the interrupt handling being preemptive, a large number of possible system behaviours are generated, which makes the correctness assurance of such systems difficult and costly. Model checking is considered to be one of the effective methods for exhausting behavioural state space for correctness. However, existing modelling approaches for interrupt-driven systems are based on either calculus or automata theory, and have a steep learning curve. To address this problem, we propose a new modelling language called interrupt sequence diagram (ISD). By extending the popular UML sequence diagram notations, the ISD supports the modelling of interrupts' essential features visually and concisely. We also propose an automata-based semantics for ISD, based on which ISD can be transformed to a subset of hybrid automata so as to leverage the abundant off-the-shelf checkers. Experiments on examples from both real-world and existing literature were conducted, and the results demonstrate our approach's usability and effectiveness.
Citations: 9
ReCDroid: Automatically Reproducing Android Application Crashes from Bug Reports
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00030
Yu Zhao, Tingting Yu, Ting Su, Yang Liu, Wei Zheng, Jingzhi Zhang, William G. J. Halfond
The large demand of mobile devices creates significant concerns about the quality of mobile applications (apps). Developers heavily rely on bug reports in issue tracking systems to reproduce failures (e.g., crashes). However, the process of crash reproduction is often manually done by developers, making the resolution of bugs inefficient, especially that bug reports are often written in natural language. To improve the productivity of developers in resolving bug reports, in this paper, we introduce a novel approach, called ReCDroid, that can automatically reproduce crashes from bug reports for Android apps. ReCDroid uses a combination of natural language processing (NLP) and dynamic GUI exploration to synthesize event sequences with the goal of reproducing the reported crash. We have evaluated ReCDroid on 51 original bug reports from 33 Android apps. The results show that ReCDroid successfully reproduced 33 crashes (63.5% success rate) directly from the textual description of bug reports. A user study involving 12 participants demonstrates that ReCDroid can improve the productivity of developers when resolving crash bug reports.
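A rough sketch of the NLP side of the pipeline (the pattern and class names are ours, not ReCDroid's code): extract (action, target) pairs from reproduction sentences, which a dynamic GUI explorer would then map onto concrete widgets and events.

    import java.util.*;
    import java.util.regex.*;

    // Sketch: turn "Click the save button, then rotate the screen." into a list of GUI events.
    class StepExtractor {
        static final Pattern STEP = Pattern.compile(
            "(click|tap|long press|rotate|type)\\s+(?:the\\s+)?([\\w ]+?)(?:,|\\.|$)",
            Pattern.CASE_INSENSITIVE);

        static List<String[]> extract(String reportText) {
            List<String[]> events = new ArrayList<>();
            Matcher m = STEP.matcher(reportText);
            while (m.find()) events.add(new String[]{m.group(1).toLowerCase(), m.group(2).trim()});
            return events;
        }

        public static void main(String[] args) {
            String report = "Click the save button, then rotate the screen. The app crashes.";
            for (String[] ev : extract(report))
                System.out.println("event: " + ev[0] + " on '" + ev[1] + "'");
            // a dynamic GUI explorer would then search the app for widgets matching these targets
        }
    }
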
Citations: 67
SafeCheck: Safety Enhancement of Java Unsafe API
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00095
Shiyou Huang, Jianmei Guo, Sanhong Li, Xiang Li, Yumin Qi, K. Chow, Jeff Huang
Java is a safe programming language by providing bytecode verification and enforcing memory protection. For instance, programmers cannot directly access the memory but have to use object references. Yet, the Java runtime provides an Unsafe API as a backdoor for the developers to access the low- level system code. Whereas the Unsafe API is designed to be used by the Java core library, a growing community of third-party libraries use it to achieve high performance. The Unsafe API is powerful, but dangerous, which leads to data corruption, resource leaks and difficult-to-diagnose JVM crash if used improperly. In this work, we study the Unsafe crash patterns and propose a memory checker to enforce memory safety, thus avoiding the JVM crash caused by the misuse of the Unsafe API at the bytecode level. We evaluate our technique on real crash cases from the openJDK bug system and real-world applications from AJDK. Our tool reduces the efforts from several days to a few minutes for the developers to diagnose the Unsafe related crashes. We also evaluate the runtime overhead of our tool on projects using intensive Unsafe operations, and the result shows that our tool causes a negligible perturbation to the execution of the applications.
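The following is an example of the kind of Unsafe misuse such a memory checker must catch (the example is ours, not from the paper; it assumes JDK 8, where sun.misc.Unsafe is reachable via reflection). Writing past the end of a region obtained from allocateMemory is not bounds-checked, so it corrupts native memory and can crash the JVM instead of throwing an exception:

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    // Do not run in production: deliberately misuses the Unsafe API.
    public class UnsafeOverflow {
        public static void main(String[] args) throws Exception {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);

            long base = unsafe.allocateMemory(16);      // 16 bytes of off-heap memory
            unsafe.putLong(base + 24, 0xDEADBEEFL);     // out of bounds: no bounds check is performed
            // also a resource leak if freeMemory(base) is never reached
            unsafe.freeMemory(base);
        }
    }
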
Citations: 9
Symbolic Repairs for GR(1) Specifications
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00106
S. Maoz, Jan Oliver Ringert, Rafi Shalom
Unrealizability is a major challenge for GR(1), an expressive assume-guarantee fragment of LTL that enables efficient synthesis. Some works attempt to help engineers deal with unrealizability by generating counter-strategies or computing an unrealizable core. Other works propose to repair the unrealizable specification by suggesting repairs in the form of automatically generated assumptions. In this work we present two novel symbolic algorithms for repairing unrealizable GR(1) specifications. The first algorithm infers new assumptions based on the recently introduced JVTS. The second algorithm infers new assumptions directly from the specification. Both algorithms are sound. The first is incomplete but can be used to suggest many different repairs. The second is complete but suggests a single repair. Both are symbolic and therefore efficient. We implemented our work, validated its correctness, and evaluated it on benchmarks from the literature. The evaluation shows the strength of our algorithms, in their ability to suggest repairs and in their performance and scalability compared to previous solutions.
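For reference, a GR(1) specification has the standard assume-guarantee shape below (textbook notation, not taken from this paper), where the \(\theta\) are initial conditions, the \(\rho\) are safety constraints, and the \(J\) are justice (recurrence) conditions of the environment (\(e\)) and the system (\(s\)):

    \varphi \;=\; \Big(\theta_e \wedge \mathbf{G}\,\rho_e \wedge \bigwedge_{i=1}^{m}\mathbf{G}\mathbf{F}\,J^e_i\Big) \;\rightarrow\; \Big(\theta_s \wedge \mathbf{G}\,\rho_s \wedge \bigwedge_{j=1}^{n}\mathbf{G}\mathbf{F}\,J^s_j\Big)

Repairing an unrealizable specification then amounts to strengthening the antecedent with additional environment assumptions (new \(\rho_e\) or \(J^e_i\) conjuncts) until a winning system strategy exists, which is what the two algorithms in this paper compute symbolically.
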
Citations: 29
DLFinder: Characterizing and Detecting Duplicate Logging Code Smells
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00032
Zhenhao Li, T. Chen, Jinqiu Yang, Weiyi Shang
Developers rely on software logs for a wide variety of tasks, such as debugging, testing, program comprehension, verification, and performance analysis. Despite the importance of logs, prior studies show that there is no industrial standard on how to write logging statements. Recent research on logs often only considers the appropriateness of a log as an individual item (e.g., one single logging statement); while logs are typically analyzed in tandem. In this paper, we focus on studying duplicate logging statements, which are logging statements that have the same static text message. Such duplications in the text message are potential indications of logging code smells, which may affect developers' understanding of the dynamic view of the system. We manually studied over 3K duplicate logging statements and their surrounding code in four large-scale open source systems: Hadoop, CloudStack, ElasticSearch, and Cassandra. We uncovered five patterns of duplicate logging code smells. For each instance of the code smell, we further manually identify the problematic (i.e., require fixes) and justifiable (i.e., do not require fixes) cases. Then, we contact developers in order to verify our manual study result. We integrated our manual study result and developers' feedback into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the four manually studied systems and two additional systems: Camel and Wicket. In total, combining the results of DLFinder and our manual analysis, we reported 82 problematic code smell instances to developers and all of them have been fixed.
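An illustration of the duplicate-logging smell the paper targets (the example code is ours, not from the paper): two handlers emit the same static message, so a log reader cannot tell which path actually failed.

    import java.util.logging.Logger;

    // Both catch blocks log an identical static text message -- a duplicate logging code smell.
    class StorageService {
        private static final Logger LOG = Logger.getLogger(StorageService.class.getName());

        void read(String key) {
            try { /* ... */ } catch (Exception e) {
                LOG.warning("Failed to access storage");   // duplicate static message
            }
        }

        void write(String key, byte[] value) {
            try { /* ... */ } catch (Exception e) {
                LOG.warning("Failed to access storage");   // duplicate static message
            }
        }
    }

Whether such an instance is problematic or justifiable depends on the surrounding code, which is why the paper pairs the automated detector with a manual classification of the smell patterns.
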
Citations: 49
Zero-Overhead Path Prediction with Progressive Symbolic Execution
Pub Date : 2019-05-25 DOI: 10.1109/ICSE.2019.00039
Richard Rutledge, Sunjae Park, Haider Adnan Khan, A. Orso, Milos Prvulović, A. Zajić
In previous work, we introduced zero-overhead profiling (ZOP), a technique that leverages the electromagnetic emissions generated by the computer hardware to profile a program without instrumenting it. Although effective, ZOP has several shortcomings: it requires test inputs that achieve extensive code coverage for its training phase; it predicts path profiles instead of complete execution traces; and its predictions can suffer unrecoverable accuracy losses. In this paper, we present zero-overhead path prediction (ZOP-2), an approach that extends ZOP and addresses its limitations. First, ZOP-2 achieves high coverage during training through progressive symbolic execution (PSE)-symbolic execution of increasingly small program fragments. Second, ZOP-2 predicts complete execution traces, rather than path profiles. Finally, ZOP-2 mitigates the problem of path mispredictions by using a stateless approach that can recover from prediction errors. We evaluated our approach on a set of benchmarks with promising results; for the cases considered, (1) ZOP-2 achieved over 90% path prediction accuracy, and (2) PSE covered feasible paths missed by traditional symbolic execution, thus boosting ZOP-2's accuracy.
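A hedged sketch of the "stateless" matching idea (the features and names are illustrative, not ZOP-2's): each electromagnetic signal window is matched to the closest known basic-block signature on its own, so a single wrong match does not propagate to the rest of the predicted trace.

    import java.util.*;

    // Sketch: nearest-neighbour matching of per-window signal features to basic-block signatures.
    class StatelessTracePredictor {
        static String closestBlock(double[] window, Map<String, double[]> signatures) {
            String best = null;
            double bestDist = Double.MAX_VALUE;
            for (Map.Entry<String, double[]> e : signatures.entrySet()) {
                double d = 0;
                for (int i = 0; i < window.length; i++) d += Math.pow(window[i] - e.getValue()[i], 2);
                if (d < bestDist) { bestDist = d; best = e.getKey(); }
            }
            return best; // each window is decided independently of earlier predictions
        }

        public static void main(String[] args) {
            Map<String, double[]> sig = new HashMap<>();
            sig.put("bb_loop_header", new double[]{0.9, 0.1});
            sig.put("bb_error_path",  new double[]{0.2, 0.8});
            double[][] observed = {{0.88, 0.15}, {0.25, 0.75}};
            for (double[] w : observed) System.out.println(closestBlock(w, sig));
        }
    }
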
Citations: 10