Pub Date : 2024-06-14, DOI: 10.1016/j.scico.2024.103167
Ying Wang, Tao Zhang, Xiapu Luo, Peng Liang
{"title":"Special Issue on Selected Tools from the Tool Track of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2023 Tool Track)","authors":"Ying Wang , Tao Zhang , Xiapu Luo , Peng Liang","doi":"10.1016/j.scico.2024.103167","DOIUrl":"10.1016/j.scico.2024.103167","url":null,"abstract":"","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103167"},"PeriodicalIF":1.5,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141394820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph neural networks have proven their effectiveness across a wide spectrum of graph-based tasks. Despite their successes, they share the same limitations as other deep learning architectures and pose additional challenges for their formal verification. To overcome these problems, we proposed a specification language, μG, that can be used to program graph neural networks. This language has been implemented in a Python library called libmg that handles the definition, compilation, visualization, and explanation of μG graph neural network models. We illustrate its usage by showing how it was used to implement a Computation Tree Logic model checker in our previous work, and evaluate its performance on the benchmarks of the Model Checking Contest. In the future, we plan to use μG to further investigate the issues of explainability and verification of graph neural networks.
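As a purely illustrative sketch (not the libmg API, whose actual interface is not reproduced here), the following Python snippet hand-rolls the least-fixpoint evaluation of the CTL property EF p on a small directed graph. This iterate-until-stable neighborhood aggregation is the kind of computation that a μG-programmed graph neural network performs when used as a model checker; the graph, the property, and all names are assumptions for illustration.

# Illustrative only: a hand-rolled least-fixpoint evaluation of the CTL
# property "EF p" (a state satisfying p is reachable) on a small directed
# graph. This mirrors the iterative neighborhood aggregation that a
# mu-G-style GNN performs; it is NOT the libmg API.

def ef(graph, p_states):
    """Least fixpoint: EF p = p OR EX (EF p)."""
    satisfied = set(p_states)
    changed = True
    while changed:                      # iterate until no node changes
        changed = False
        for node, successors in graph.items():
            if node not in satisfied and any(s in satisfied for s in successors):
                satisfied.add(node)     # node can reach p in one more step
                changed = True
    return satisfied

if __name__ == "__main__":
    # 0 -> 1 -> 2, 3 -> 3 (a sink that never reaches state 2)
    graph = {0: [1], 1: [2], 2: [], 3: [3]}
    print(sorted(ef(graph, {2})))       # [0, 1, 2]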
{"title":"libmg: A Python library for programming graph neural networks in μG","authors":"Matteo Belenchia, Flavio Corradini, Michela Quadrini, Michele Loreti","doi":"10.1016/j.scico.2024.103165","DOIUrl":"10.1016/j.scico.2024.103165","url":null,"abstract":"<div><p>Graph neural networks have proven their effectiveness across a wide spectrum of graph-based tasks. Despite their successes, they share the same limitations as other deep learning architectures and pose additional challenges for their formal verification. To overcome these problems, we proposed a specification language, <span><math><mi>μ</mi><mi>G</mi></math></span>, that can be used to <em>program</em> graph neural networks. This language has been implemented in a Python library called <span>libmg</span> that handles the definition, compilation, visualization, and explanation of <span><math><mi>μ</mi><mi>G</mi></math></span> graph neural network models. We illustrate its usage by showing how it was used to implement a Computation Tree Logic model checker in our previous work, and evaluate its performance on the benchmarks of the Model Checking Contest. In the future, we plan to use <span><math><mi>μ</mi><mi>G</mi></math></span> to further investigate the issues of explainability and verification of graph neural networks.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103165"},"PeriodicalIF":1.3,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141398951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing software reliability, dependability, and security requires effective identification and mitigation of defects during early development stages. Software defect prediction (SDP) models have emerged as valuable tools for this purpose. However, there is currently a lack of consensus on how to evaluate the predictive performance of newly proposed models, which hinders accurate measurement of progress and can lead to misleading conclusions. To tackle this challenge, we present MATTER (a fraMework towArd a consisTenT pErformance compaRison), which aims to provide reliable and consistent performance comparisons for SDP models. MATTER incorporates three key considerations. First, it establishes a global reference point, ONE (glObal baseliNe modEl), which possesses the 3S properties (Simplicity in implementation, Strong predictive ability, and Stable prediction performance), to serve as the baseline for evaluating other models. Second, it proposes using the SQA-effort-aligned threshold setting to ensure fair performance comparisons. Third, it advocates for consistent performance evaluation by adopting a set of core performance indicators that reflect the practical value of prediction models in achieving tangible progress. By applying MATTER to the same benchmark data sets, researchers and practitioners can obtain more accurate and meaningful insights into the performance of defect prediction models, thereby facilitating informed decision-making and improving software quality. When evaluating representative SDP models from recent years using MATTER, we were surprised to observe that none of these models demonstrated a notable enhancement in prediction performance compared to the simple baseline model ONE. In future studies, we strongly recommend the adoption of MATTER to assess the actual usefulness of newly proposed models, promoting reliable scientific progress in defect prediction.
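To make the effort-aligned comparison concrete, here is a minimal, hypothetical sketch: a candidate model and a simple size-based baseline are both cut off at the same inspection budget (20% of total lines of code, an assumed figure), and recall of defective modules is compared. The smallest-first baseline, the budget, the data, and the single indicator are illustrative stand-ins, not MATTER's exact ONE model or core indicator set.

# Hypothetical sketch of an effort-aligned comparison between a candidate
# defect predictor and a simple baseline, in the spirit of MATTER. The
# baseline (inspect smallest modules first) and the 20% LOC budget are
# assumptions for illustration, not the paper's exact ONE model or indicators.

def recall_at_effort(scores, loc, defective, effort_ratio=0.2):
    """Inspect modules in descending score order until the LOC budget
    (effort_ratio of total LOC) is exhausted; return recall of defects."""
    budget = effort_ratio * sum(loc)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    inspected_loc, found = 0.0, 0
    for i in order:
        if inspected_loc + loc[i] > budget:
            break
        inspected_loc += loc[i]
        found += defective[i]
    total_defective = sum(defective)
    return found / total_defective if total_defective else 0.0

if __name__ == "__main__":
    loc        = [100, 250, 80, 150, 120, 60, 200, 90]    # module sizes
    defective  = [1,   0,   1,  0,   0,   1,  1,   0]     # ground-truth labels
    model_prob = [0.8, 0.3, 0.7, 0.2, 0.4, 0.9, 0.1, 0.5] # candidate model scores
    baseline   = [-l for l in loc]                         # smallest-first size baseline
    print("model   :", recall_at_effort(model_prob, loc, defective))
    print("baseline:", recall_at_effort(baseline,  loc, defective))

On this toy data both reach the same recall at the aligned budget, which echoes why a fixed, simple reference point matters when claiming improvements.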
{"title":"Towards a framework for reliable performance evaluation in defect prediction","authors":"Xutong Liu, Shiran Liu, Zhaoqiang Guo, Peng Zhang, Yibiao Yang, Huihui Liu, Hongmin Lu, Yanhui Li, Lin Chen, Yuming Zhou","doi":"10.1016/j.scico.2024.103164","DOIUrl":"10.1016/j.scico.2024.103164","url":null,"abstract":"<div><p>Enhancing software reliability, dependability, and security requires effective identification and mitigation of defects during early development stages. Software defect prediction (SDP) models have emerged as valuable tools for this purpose. However, there is currently a lack of consensus in evaluating the predictive performance of newly proposed models, which hinders accurate measurement of progress and can lead to misleading conclusions. To tackle this challenge, we present MATTER (a fraMework towArd a consisTenT pErformance compaRison), which aims to provide reliable and consistent performance comparisons for SDP models. MATTER incorporates three key considerations. First, it establishes a global reference point, ONE (glObal baseliNe modEl), which possesses the 3S properties (Simplicity in implementation, Strong predictive ability, and Stable prediction performance), to serve as the baseline for evaluating other models. Second, it proposes using the SQA-effort-aligned threshold setting to ensure fair performance comparisons. Third, it advocates for consistent performance evaluation by adopting a set of core performance indicators that reflect the practical value of prediction models in achieving tangible progress. Through the application of MATTER to the same benchmark data sets, researchers and practitioners can obtain more accurate and meaningful insights into the performance of defect prediction models, thereby facilitating informed decision-making and improving software quality. When evaluating representative SDP models from recent years using MATTER, we surprisingly observed that: none of these models demonstrated a notable enhancement in prediction performance compared to the simple baseline model ONE. In future studies, we strongly recommend the adoption of MATTER to assess the actual usefulness of newly proposed models, promoting reliable scientific progress in defect prediction.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103164"},"PeriodicalIF":1.3,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141408043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-07, DOI: 10.1016/j.scico.2024.103155
Chi Zhang, Jinfu Chen, Saihua Cai, Wen Zhang, Rexford Nii Ayitey Sosu, Haibo Chen
Compilers play a critical role in current software construction. However, vulnerabilities or bugs within a compiler can pose significant challenges to ensuring the security of the resultant software. In recent years, many compiler developers have made use of testing techniques to address and mitigate such concerns. Among these techniques, fuzzing is widely used to detect software bugs. However, when fuzzing compilers, there are still shortcomings in terms of the diversity and validity of test cases. This paper introduces TR-Fuzz, a Transformer-based fuzzing tool specifically designed for C compilers. Leveraging position embeddings and multi-head attention mechanisms, TR-Fuzz establishes relationships among data, facilitating the generation of well-formed C programs for compiler testing. In addition, we use different generation strategies in the process of program generation to improve the performance of TR-Fuzz. We validate the effectiveness of TR-Fuzz through comparison with existing fuzzing tools for C compilers. The experimental results show that TR-Fuzz increases the pass rate of the generated C programs by an average of about 12% and improves the coverage of the programs under test compared with existing tools. Benefiting from the improved pass rate and coverage, we found five bugs in GCC-9.
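The following is an illustrative, untrained miniature of the two architectural ingredients the abstract highlights (learned position embeddings plus multi-head self-attention over a toy C token vocabulary), written with PyTorch as an assumed dependency. It is not TR-Fuzz's actual model, training data, or generation strategies; sampled "programs" from this untrained sketch are random.

# Illustrative sketch (not TR-Fuzz itself): a tiny Transformer-style
# generator over a toy C token vocabulary, showing learned position
# embeddings and multi-head self-attention. TR-Fuzz trains such a model
# on real C code and adds its own generation strategies.
import torch
import torch.nn as nn

VOCAB = ["int", "main", "(", ")", "{", "}", "return", "0", ";", "<eos>"]

class TinyGenerator(nn.Module):
    def __init__(self, vocab_size, d_model=32, n_heads=4, max_len=32):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)           # position embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, ids):                                  # ids: (1, seq_len)
        seq_len = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(seq_len).unsqueeze(0))
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
        h, _ = self.attn(x, x, x, attn_mask=mask)            # causal self-attention
        return self.out(h)                                   # next-token logits

def sample(model, max_tokens=16):
    ids = torch.tensor([[0]])                                # start from "int"
    for _ in range(max_tokens):
        logits = model(ids)[:, -1, :]
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        ids = torch.cat([ids, nxt], dim=1)
        if VOCAB[nxt.item()] == "<eos>":
            break
    return " ".join(VOCAB[i] for i in ids[0].tolist())

if __name__ == "__main__":
    print(sample(TinyGenerator(len(VOCAB))))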
{"title":"TR-Fuzz: A syntax valid tool for fuzzing C compilers","authors":"Chi Zhang , Jinfu Chen , Saihua Cai , Wen Zhang , Rexford Nii Ayitey Sosu , Haibo Chen","doi":"10.1016/j.scico.2024.103155","DOIUrl":"10.1016/j.scico.2024.103155","url":null,"abstract":"<div><p>Compilers play a critical role in current software construction. However, the vulnerabilities or bugs within the compiler can pose significant challenges to ensuring the security of the resultant software. In recent years, many compilers have made use of testing techniques to address and mitigate such concerns. Fuzzing is widely used among these techniques to detect software bugs. However, when fuzzing compilers, there are still shortcomings in terms of the diversity and validity of test cases. This paper introduces TR-Fuzz, a fuzzing tool specifically designed for C compilers based on Transformer. Leveraging position embedding and multi-head attention mechanisms, TR-Fuzz establishes relationships among data, facilitating the generation of well-formed C programs for compiler testing. In addition, we use different generation strategies in the process of program generation to improve the performance of TR-Fuzz. We validate the effectiveness of TR-Fuzz through the comparison with existing fuzzing tools for C compilers. The experimental results show that TR-Fuzz increases the pass rate of the generated C programs by an average of about 12% and improves the coverage of programs under test compared with the existing tools. Benefiting from the improved pass rate and coverage, we found five bugs in GCC-9.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103155"},"PeriodicalIF":1.3,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141405384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-06, DOI: 10.1016/j.scico.2024.103157
Tom Lauwaerts, Stefan Marr, Christophe Scholliers
Testing is an essential part of the software development cycle. Unfortunately, testing on constrained devices is currently very challenging. First, the limited memory of constrained devices severely restricts the size of test suites. Second, the limited processing power causes test suites to execute slowly, preventing a fast feedback loop. Third, when the constrained device becomes unresponsive, it is impossible to distinguish between a test failing and a test simply taking very long, forcing the developer to work with timeouts. Unfortunately, timeouts can cause tests to be flaky, i.e., have unpredictable outcomes independent of code changes. Given these problems, most IoT developers rely on laborious manual testing.
In this paper, we propose the novel testing framework Latch (Large-scale Automated Testing on Constrained Hardware) to overcome the three main challenges of running large test suites on constrained hardware, as well as automate manual testing scenarios through a novel testing methodology based on debugger-like operations—we call this new testing approach managed testing.
The core idea of Latch is to enable testing on constrained devices without those devices maintaining the whole test suite in memory. To this end, programmers script and run tests on a workstation, which then step-wise instructs the constrained device to execute each test, thereby overcoming the memory constraints. Our testing framework further allows developers to mark tests as depending on other tests. This way, Latch can skip tests that depend on previously failing tests, resulting in a faster feedback loop. Finally, Latch addresses the issue of timeouts and flaky tests by including an analysis mode that provides feedback on timeouts and the flakiness of tests.
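A minimal sketch of this host-side workflow, assuming the device transport is abstracted as a simple callable (this is not Latch's actual API): the workstation keeps the whole suite, ships one test at a time to the constrained device, and skips tests whose dependencies did not pass.

# Minimal sketch of the host side of "managed testing" in the spirit of
# Latch (not its actual API): the workstation holds the suite, sends one
# test at a time to the constrained device, and skips tests whose
# dependencies already failed. The device transport is a callable returning
# "pass"/"fail"/"timeout"; a real setup would speak over serial/Wi-Fi.

def run_suite(tests, run_on_device):
    """tests: list of (name, depends_on, payload); returns {name: outcome}."""
    results = {}
    for name, depends_on, payload in tests:
        if any(results.get(dep) != "pass" for dep in depends_on):
            results[name] = "skipped"             # dependency not green: skip
            continue
        results[name] = run_on_device(payload)    # one test resident at a time
    return results

if __name__ == "__main__":
    fake_device = lambda payload: "fail" if payload == "bad" else "pass"
    suite = [
        ("boot",      [],            "ok"),
        ("wifi_init", ["boot"],      "bad"),
        ("wifi_send", ["wifi_init"], "ok"),   # skipped: wifi_init failed
    ]
    print(run_suite(suite, fake_device))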
To illustrate the expressiveness of Latch, we present testing scenarios representing unit testing, integration testing, and end-to-end testing. We evaluate the performance of Latch by testing a virtual machine against the WebAssembly specification, with a large test suite consisting of 10,213 tests running on an ESP32 microcontroller. Our experience shows that the testing framework is expressive, reliable, and reasonably fast, making it suitable to run large test suites on constrained devices. Furthermore, the debugger-like operations make it possible to closely mimic manual testing.
{"title":"Latch: Enabling large-scale automated testing on constrained systems","authors":"Tom Lauwaerts , Stefan Marr , Christophe Scholliers","doi":"10.1016/j.scico.2024.103157","DOIUrl":"10.1016/j.scico.2024.103157","url":null,"abstract":"<div><p>Testing is an essential part of the software development cycle. Unfortunately, testing on constrained devices is currently very challenging. First, the limited memory of constrained devices severely restricts the size of test suites. Second, the limited processing power causes test suites to execute slowly, preventing a fast feedback loop. Third, when the constrained device becomes unresponsive, it is impossible to distinguish between the test failing or taking very long, forcing the developer to work with timeouts. Unfortunately, timeouts can cause tests to be flaky, i.e., have unpredictable outcomes independent of code changes. Given these problems, most IoT developers rely on laborious manual testing.</p><p>In this paper, we propose the novel testing framework <em>Latch</em> (Large-scale Automated Testing on Constrained Hardware) to overcome the three main challenges of running large test suites on constrained hardware, as well as automate manual testing scenarios through a novel testing methodology based on debugger-like operations—we call this new testing approach <em>managed testing</em>.</p><p>The core idea of <em>Latch</em> is to enable testing on constrained devices without those devices maintaining the whole test suite in memory. Therefore, programmers script and run tests on a workstation which then step-wise instructs the constrained device to execute each test, thereby overcoming the memory constraints. Our testing framework further allows developers to mark tests as depending on other tests. This way, <em>Latch</em> can skip tests that depend on previously failing tests resulting in a faster feedback loop. Finally, <em>Latch</em> addresses the issue of timeouts and flaky tests by including an analysis mode that provides feedback on timeouts and the flakiness of tests.</p><p>To illustrate the expressiveness of <em>Latch</em>, we present testing scenarios representing unit testing, integration testing, and end-to-end testing. We evaluate the performance of <em>Latch</em> by testing a virtual machine against the WebAssembly specification, with a large test suite consisting of 10,213 tests running on an ESP32 microcontroller. Our experience shows that the testing framework is expressive, reliable and reasonably fast, making it suitable to run large test suites on constrained devices. Furthermore, the debugger-like operations enable to closely mimic manual testing.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103157"},"PeriodicalIF":1.5,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141414909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-06, DOI: 10.1016/j.scico.2024.103156
Jinfu Chen, Yemin Yin, Saihua Cai, Weijia Wang, Shengran Wang, Jiming Chen
Software vulnerability detection is a challenging task in the security field, and the boom of deep learning technology has promoted the development of automatic vulnerability detection. Compared with sequence-based deep learning models, graph neural networks (GNNs) can learn the structural features of code and therefore perform well in vulnerability detection for source code. However, different GNNs produce different detection results for the same code, and using a single kind of GNN may lead to high false positive and false negative rates. In addition, the complex structure of source code means that a single GNN model cannot effectively learn its deep features, leading to low detection accuracy. To address these limitations, we propose a software vulnerability detection model called iGnnVD based on integrated graph neural networks. In the proposed iGnnVD model, base detectors built on GCN, GAT, and APPNP are first constructed with a bidirectional structure to capture bidirectional information in the code graph; residual connections are then used to aggregate features while retaining the features from each step; finally, a convolutional layer performs the aggregated classification. In addition, an integration module that analyzes the detection results of the three detectors for final classification is designed using a voting strategy, to address the high false positive and false negative rates caused by using a single kind of base detector. We perform extensive experiments on three datasets, and the results show that, compared with existing deep learning-based vulnerability detection models, the proposed iGnnVD model improves the detection accuracy for vulnerabilities in source code, reduces the false positive and false negative rates, and exhibits good stability.
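The voting-based integration module can be pictured with the toy sketch below: three placeholder detectors (standing in for the trained GCN, GAT, and APPNP models, which are not implemented here) each cast a vote per code sample, and the majority label is returned. The lambda detectors and the sample string are illustrative assumptions only.

# Sketch of the voting-based integration module only (not the GCN/GAT/APPNP
# detectors themselves): each base detector votes "vulnerable" (1) or
# "clean" (0) per code sample, and the majority decides. The three lambdas
# stand in for trained graph neural networks.
from collections import Counter

def majority_vote(detectors, sample):
    votes = [d(sample) for d in detectors]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    gcn_like   = lambda s: 1 if "strcpy" in s else 0
    gat_like   = lambda s: 1 if "gets"   in s else 0
    appnp_like = lambda s: 1 if "strcpy" in s or "gets" in s else 0
    code = "strcpy(buf, user_input);"
    print(majority_vote([gcn_like, gat_like, appnp_like], code))   # prints 1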
{"title":"iGnnVD: A novel software vulnerability detection model based on integrated graph neural networks","authors":"Jinfu Chen , Yemin Yin , Saihua Cai , Weijia Wang , Shengran Wang , Jiming Chen","doi":"10.1016/j.scico.2024.103156","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103156","url":null,"abstract":"<div><p>Software vulnerability detection is a challenging task in the security field, the boom of deep learning technology promotes the development of automatic vulnerability detection. Compared with sequence-based deep learning models, graph neural network (GNN) can learn the structural features of code, it performs well in the field of vulnerability detection for source code. However, different GNNs have different detection results for the same code, and using a single kind of GNN may lead to high false positive rate and false negative rate. In addition, the complex structure of source code causes single GNN model cannot effectively learn their depth feature, thereby leading to low detection accuracy. To solve these limitations, we propose a software vulnerability detection model called iGnnVD based on the integrated graph neural networks. In the proposed iGnnVD model, the base detectors including GCN, GAT and APPNP are first constructed to capture the bidirectional information in the code graph structure with bidirectional structure; And then, the residual connection is used to aggregate the features while retaining the features each time; Finally, the convolutional layer is used to perform the aggregated classification. In addition, an integration module that analyzes the detection results of three detectors for final classification is designed using a voting strategy to solve the problem of high false positive rate and false negative rate caused by using a single kind of base detector. We perform extensive experiments on three datasets and experimental results show that the proposed iGnnVD model can improve the detection accuracy of vulnerabilities in source code as well as reduce the false positive rate and false negative rate compared with existing deep learning-based vulnerability detection models, it also has good stability.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103156"},"PeriodicalIF":1.3,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141323285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bounded exhaustive testing is a very effective technique for bug finding, which tests a given program under all valid bounded inputs, for a bound provided by the developer. Existing bounded exhaustive testing techniques require the developer to provide a precise specification of the valid inputs. Such specifications are rarely present as part of the software under test, and writing them can be costly and challenging.
To address this situation we propose BEAPI, a tool that, given a Java class under test, generates a bounded exhaustive set of objects of the class solely by employing the methods of the class, without the need for a specification. BEAPI creates sequences of calls to methods from the class' public API, and executes them to generate inputs. BEAPI implements very effective pruning techniques that allow it to generate inputs efficiently.
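A rough Python analogue of this generation loop (BEAPI itself targets Java classes) is sketched below: call sequences over a toy class's public methods are explored breadth-first up to a bound, and sequences that only reproduce an already-seen object state are pruned. The toy class, the state canonicalization, and the bound are assumptions for illustration, and the state-based pruning shown is just one plausible form of the pruning the abstract refers to.

# Illustrative Python analogue of API-based bounded exhaustive generation
# (not BEAPI itself): starting from a fresh object, explore all sequences of
# public method calls up to a given length, pruning sequences that reach an
# object state already seen.
from itertools import product

class BoundedSet:
    """Toy class under test: a set that holds at most two elements."""
    def __init__(self):
        self.elems = []
    def add(self, x):
        if len(self.elems) < 2 and x not in self.elems:
            self.elems.append(x)
    def remove(self, x):
        if x in self.elems:
            self.elems.remove(x)

def generate(max_calls=3, values=(0, 1)):
    ops = [("add", v) for v in values] + [("remove", v) for v in values]
    seen, frontier, inputs = set(), [[]], []
    for _ in range(max_calls):
        next_frontier = []
        for seq, op in product(frontier, ops):
            candidate = seq + [op]
            obj = BoundedSet()
            for name, arg in candidate:            # replay the call sequence
                getattr(obj, name)(arg)
            state = tuple(sorted(obj.elems))       # canonical object state
            if state in seen:
                continue                           # prune: state already covered
            seen.add(state)
            inputs.append(obj)
            next_frontier.append(candidate)
        frontier = next_frontier
    return inputs

if __name__ == "__main__":
    print([sorted(o.elems) for o in generate()])   # [[0], [1], [], [0, 1]]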
We experimentally assessed BEAPI in several case studies from the literature, and showed that it performs comparably to the best existing specification-based bounded exhaustive generation tool (Korat), without requiring a specification of the valid inputs.
{"title":"BEAPI: A tool for bounded exhaustive input generation from APIs","authors":"Mariano Politano , Valeria Bengolea , Facundo Molina , Nazareno Aguirre , Marcelo Frias , Pablo Ponzio","doi":"10.1016/j.scico.2024.103153","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103153","url":null,"abstract":"<div><p>Bounded exhaustive testing is a very effective technique for bug finding, which proposes to test a given program under all valid bounded inputs, for a bound provided by the developer. Existing bounded exhaustive testing techniques require the developer to provide a precise specification of the valid inputs. Such specifications are rarely present as part of the software under test, and writing them can be costly and challenging.</p><p>To address this situation we propose BEAPI, a tool that given a Java class under test, generates a bounded exhaustive set of objects of the class solely employing the methods of the class, without the need for a specification. BEAPI creates sequences of calls to methods from the class' public API, and executes them to generate inputs. BEAPI implements very effective pruning techniques that allow it to generate inputs efficiently.</p><p>We experimentally assessed BEAPI in several case studies from the literature, and showed that it performs comparably to the best existing specification-based bounded exhaustive generation tool (Korat), without requiring a specification of the valid inputs.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103153"},"PeriodicalIF":1.3,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141294635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symbolic execution is a software verification technique that symbolically runs programs and thereby checks for bugs. Ranged symbolic execution performs symbolic execution on program parts, so-called path ranges, in parallel. Due to the parallelism, verification is accelerated and hence scales to larger programs.
In this paper, we discuss a generalization of ranged symbolic execution to arbitrary program analyses. More specifically, we present a verification approach that splits programs into path ranges and then runs arbitrary analyses on the ranges in parallel. In particular, our approach allows different analyses to be run on different program parts. We have implemented this generalization on top of the tool CPAchecker and evaluated it on programs from the SV-COMP benchmark. Our evaluation shows that verification can benefit from the parallelization of the verification task, but also needs a form of work stealing (between analyses) to become efficient.
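As a schematic illustration (not CPAchecker's implementation), the sketch below models path ranges as plain input intervals placed on a shared work queue; two worker threads, each running a placeholder analysis, pull ranges as they become idle, which approximates the kind of work sharing/stealing the evaluation calls for. The range representation, the "analysis", and all names are assumptions.

# Schematic sketch (not CPAchecker): path ranges go on a shared work queue
# and several workers, each running a placeholder analysis, pull ranges as
# they become idle. Real path ranges are bounded by program paths, not
# plain (lo, hi) intervals.
import queue
import threading

def analyze(range_, name):
    lo, hi = range_
    # Placeholder "analysis": flag a bug if the range contains the value 7.
    return f"{name}: range {range_} -> {'BUG' if lo <= 7 <= hi else 'safe'}"

def worker(work, results, name):
    while True:
        try:
            range_ = work.get_nowait()       # idle workers grab the next range
        except queue.Empty:
            return
        results.append(analyze(range_, name))

if __name__ == "__main__":
    work, results = queue.Queue(), []
    for r in [(0, 3), (4, 9), (10, 15), (16, 20)]:
        work.put(r)
    threads = [threading.Thread(target=worker, args=(work, results, f"analysis-{i}"))
               for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("\n".join(results))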
{"title":"Parallel program analysis on path ranges","authors":"Jan Haltermann , Marie-Christine Jakobs , Cedric Richter , Heike Wehrheim","doi":"10.1016/j.scico.2024.103154","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103154","url":null,"abstract":"<div><p>Symbolic execution is a software verification technique symbolically running programs and thereby checking for bugs. Ranged symbolic execution performs symbolic execution on program parts, so-called <em>path ranges</em>, in parallel. Due to the parallelism, verification is accelerated and hence scales to larger programs.</p><p>In this paper, we discuss a generalization of ranged symbolic execution to arbitrary program analyses. More specifically, we present a verification approach that splits programs into path ranges and then runs arbitrary analyses on the ranges in parallel. Our approach in particular allows to run <em>different</em> analyses on different program parts. We have implemented this generalization on top of the tool <span>CPAchecker</span> and evaluated it on programs from the SV-COMP benchmark. Our evaluation shows that verification can benefit from the parallelization of the verification task, but also needs a form of work stealing (between analyses) to become efficient.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103154"},"PeriodicalIF":1.3,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0167642324000777/pdfft?md5=c9721851a6e6fced1e9f8337cb568046&pid=1-s2.0-S0167642324000777-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141294633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-27, DOI: 10.1016/j.scico.2024.103152
Jaemin Hong, Sunghwan Shim, Sanguk Park, Tae Woo Kim, Jungwoo Kim, Junsoo Lee, Sukyoung Ryu, Jeehoon Kang
Operating systems (OSs) suffer from pervasive memory bugs. Their primary source is shared mutable states, crucial to low-level control and efficiency. The safety of shared mutable states is not guaranteed by C/C++, in which legacy OSs are typically written. Recently, researchers have adopted Rust into OS development to implement clean-slate OSs with fewer memory bugs. Rust ensures the safety of shared mutable states that follow the “aliasing XOR mutability” discipline via its type system. With the success of Rust in clean-slate OSs, the industry has become interested in rewriting legacy OSs in Rust. However, one of the most significant obstacles to this goal is shared mutable states that are aliased AND mutable (A&M). While they are essential to the performance of legacy OSs, Rust does not guarantee their safety. Instead, programmers have identified A&M states with the same reasoning principle dubbed an A&M pattern and implemented its modular abstraction to facilitate safety reasoning. This paper investigates modular abstractions for A&M patterns in legacy OSs. We present modular abstractions for six A&M patterns in the xv6 OS. Our investigation of Linux and clean-slate Rust OSs shows that the patterns are practical, as all of them are utilized in Linux, and the abstractions are original, as none of them are found in the Rust OSs. Using the abstractions, we implemented xv6-Rust, a complete rewrite of xv6 in Rust. The abstractions incur no run-time overhead compared to xv6 while reducing the reasoning cost of xv6-Rust to the level of the clean-slate Rust OSs.
{"title":"Taming shared mutable states of operating systems in Rust","authors":"Jaemin Hong , Sunghwan Shim , Sanguk Park , Tae Woo Kim , Jungwoo Kim , Junsoo Lee , Sukyoung Ryu , Jeehoon Kang","doi":"10.1016/j.scico.2024.103152","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103152","url":null,"abstract":"<div><p>Operating systems (OSs) suffer from pervasive memory bugs. Their primary source is shared mutable states, crucial to low-level control and efficiency. The safety of shared mutable states is not guaranteed by C/C++, in which legacy OSs are typically written. Recently, researchers have adopted Rust into OS development to implement clean-slate OSs with fewer memory bugs. Rust ensures the safety of shared mutable states that follow the “aliasing XOR mutability” discipline via its type system. With the success of Rust in clean-slate OSs, the industry has become interested in rewriting legacy OSs in Rust. However, one of the most significant obstacles to this goal is shared mutable states that are <em>aliased AND mutable</em> (A&M). While they are essential to the performance of legacy OSs, Rust does not guarantee their safety. Instead, programmers have identified A&M states with the same reasoning principle dubbed an <em>A&M pattern</em> and implemented its modular abstraction to facilitate safety reasoning. This paper investigates modular abstractions for A&M patterns in legacy OSs. We present modular abstractions for six A&M patterns in the xv6 OS. Our investigation of Linux and clean-slate Rust OSs shows that the patterns are practical, as all of them are utilized in Linux, and the abstractions are original, as none of them are found in the Rust OSs. Using the abstractions, we implemented xv6<span><math><msub><mrow></mrow><mrow><mi>R</mi><mi>u</mi><mi>s</mi><mi>t</mi></mrow></msub></math></span>, a complete rewrite of xv6 in Rust. The abstractions incur no run-time overhead compared to xv6 while reducing the reasoning cost of xv6<span><math><msub><mrow></mrow><mrow><mi>R</mi><mi>u</mi><mi>s</mi><mi>t</mi></mrow></msub></math></span> to the level of the clean-slate Rust OSs.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103152"},"PeriodicalIF":1.3,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141294634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}