
Science of Computer Programming: Latest Publications

Special Issue on Selected Tools from the Tool Track of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2023 Tool Track)
IF 1.5 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-14 · DOI: 10.1016/j.scico.2024.103167
Ying Wang, Tao Zhang, Xiapu Luo, Peng Liang
{"title":"Special Issue on Selected Tools from the Tool Track of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2023 Tool Track)","authors":"Ying Wang , Tao Zhang , Xiapu Luo , Peng Liang","doi":"10.1016/j.scico.2024.103167","DOIUrl":"10.1016/j.scico.2024.103167","url":null,"abstract":"","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103167"},"PeriodicalIF":1.5,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141394820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
libmg: A Python library for programming graph neural networks in μG
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-14 · DOI: 10.1016/j.scico.2024.103165
Matteo Belenchia, Flavio Corradini, Michela Quadrini, Michele Loreti

Graph neural networks have proven their effectiveness across a wide spectrum of graph-based tasks. Despite their successes, they share the same limitations as other deep learning architectures and pose additional challenges for their formal verification. To overcome these problems, we proposed a specification language, μG, that can be used to program graph neural networks. This language has been implemented in a Python library called libmg that handles the definition, compilation, visualization, and explanation of μG graph neural network models. We illustrate its usage by showing how it was used to implement a Computation Tree Logic model checker in our previous work, and evaluate its performance on the benchmarks of the Model Checking Contest. In the future, we plan to use μG to further investigate the issues of explainability and verification of graph neural networks.
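
To make the connection between μG graph computations and the Computation Tree Logic model checker mentioned above concrete, here is a minimal plain-Python sketch. It does not use the libmg API; the function and the example graph are our own illustrative choices. It labels the nodes of a small transition graph that satisfy EF p by iterating a successor-aggregation step to a fixed point, the kind of node-labelling computation a μG expression can denote.

```python
# Plain-Python sketch of a fixed-point graph computation in the spirit of mu-G
# (NOT the libmg API): mark every node of a transition graph satisfying the
# CTL property EF p, i.e. "some node labelled p is reachable", by repeatedly
# aggregating information from successor nodes until nothing changes.

def ef(succ, labelled_p):
    """succ: dict node -> list of successor nodes; labelled_p: set of nodes."""
    sat = set(labelled_p)                 # base case: p holds here
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for node, successors in succ.items():
            if node not in sat and any(s in sat for s in successors):
                sat.add(node)             # some successor satisfies EF p
                changed = True
    return sat

# Tiny example graph: 0 -> 1 -> 2, 3 -> 3, with p holding only in node 2.
graph = {0: [1], 1: [2], 2: [], 3: [3]}
print(ef(graph, {2}))                     # {0, 1, 2}
```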

Citations: 0
Towards a framework for reliable performance evaluation in defect prediction
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-12 · DOI: 10.1016/j.scico.2024.103164
Xutong Liu, Shiran Liu, Zhaoqiang Guo, Peng Zhang, Yibiao Yang, Huihui Liu, Hongmin Lu, Yanhui Li, Lin Chen, Yuming Zhou

Enhancing software reliability, dependability, and security requires effective identification and mitigation of defects during early development stages. Software defect prediction (SDP) models have emerged as valuable tools for this purpose. However, there is currently a lack of consensus in evaluating the predictive performance of newly proposed models, which hinders accurate measurement of progress and can lead to misleading conclusions. To tackle this challenge, we present MATTER (a fraMework towArd a consisTenT pErformance compaRison), which aims to provide reliable and consistent performance comparisons for SDP models. MATTER incorporates three key considerations. First, it establishes a global reference point, ONE (glObal baseliNe modEl), which possesses the 3S properties (Simplicity in implementation, Strong predictive ability, and Stable prediction performance), to serve as the baseline for evaluating other models. Second, it proposes using the SQA-effort-aligned threshold setting to ensure fair performance comparisons. Third, it advocates for consistent performance evaluation by adopting a set of core performance indicators that reflect the practical value of prediction models in achieving tangible progress. Through the application of MATTER to the same benchmark data sets, researchers and practitioners can obtain more accurate and meaningful insights into the performance of defect prediction models, thereby facilitating informed decision-making and improving software quality. When evaluating representative SDP models from recent years using MATTER, we surprisingly observed that none of these models demonstrated a notable enhancement in prediction performance compared to the simple baseline model ONE. In future studies, we strongly recommend the adoption of MATTER to assess the actual usefulness of newly proposed models, promoting reliable scientific progress in defect prediction.
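
To illustrate the kind of effort-aware comparison the abstract describes, the sketch below is a generic illustration rather than MATTER's exact definitions of the ONE baseline or the SQA-effort-aligned threshold: two rankings are cut at the same inspection budget (here 20% of total lines of code) and compared by the recall of defective modules found within that budget. The module data and scores are made up for the example.

```python
# Hedged illustration of effort-aware evaluation (not MATTER's exact metrics):
# both the candidate model and a naive size-based baseline are given the same
# inspection budget, and we measure how many defective modules each finds.

def recall_at_effort(modules, key, budget=0.20):
    """modules: list of (loc, is_defective, score); key: ranking key function."""
    total_loc = sum(loc for loc, _, _ in modules)
    total_defects = sum(d for _, d, _ in modules)
    inspected_loc = found = 0
    for loc, defective, _ in sorted(modules, key=key, reverse=True):
        if inspected_loc + loc > budget * total_loc:
            break                          # inspection budget exhausted
        inspected_loc += loc
        found += defective
    return found / total_defects if total_defects else 0.0

# (loc, is_defective, model_score) per module; all numbers are invented.
modules = [(100, 1, 0.9), (400, 0, 0.8), (120, 1, 0.6),
           (500, 0, 0.3), (90, 1, 0.7), (190, 0, 0.2)]

candidate = recall_at_effort(modules, key=lambda m: m[2])        # model ranking
baseline  = recall_at_effort(modules, key=lambda m: 1.0 / m[0])  # smallest first
# On this made-up data the simple size-based ranking finds more defects within
# the same budget, echoing the abstract's caution about trivial baselines.
print(f"recall at 20% effort: model={candidate:.2f} size-baseline={baseline:.2f}")
```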

Citations: 0
TR-Fuzz: A syntax valid tool for fuzzing C compilers
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-07 · DOI: 10.1016/j.scico.2024.103155
Chi Zhang, Jinfu Chen, Saihua Cai, Wen Zhang, Rexford Nii Ayitey Sosu, Haibo Chen

Compilers play a critical role in current software construction. However, vulnerabilities or bugs within a compiler can pose significant challenges to ensuring the security of the resultant software. In recent years, many compilers have made use of testing techniques to address and mitigate such concerns. Among these techniques, fuzzing is widely used to detect software bugs. However, when fuzzing compilers, there are still shortcomings in terms of the diversity and validity of test cases. This paper introduces TR-Fuzz, a Transformer-based fuzzing tool specifically designed for C compilers. Leveraging position embedding and multi-head attention mechanisms, TR-Fuzz establishes relationships among data, facilitating the generation of well-formed C programs for compiler testing. In addition, we use different generation strategies in the process of program generation to improve the performance of TR-Fuzz. We validate the effectiveness of TR-Fuzz through comparison with existing fuzzing tools for C compilers. The experimental results show that TR-Fuzz increases the pass rate of the generated C programs by an average of about 12% and improves the coverage of programs under test compared with the existing tools. Benefiting from the improved pass rate and coverage, we found five bugs in GCC-9.
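
The compiler-fuzzing workflow behind metrics such as pass rate can be sketched with a minimal triage harness. This is not TR-Fuzz itself; the generated programs below are hard-coded stand-ins for a generator's output, and the harness merely shows how generated programs are typically classified.

```python
# Minimal compiler-fuzzing harness (not TR-Fuzz): each generated C program is
# fed to the compiler under test; programs that compile cleanly count toward
# the pass rate, and abnormal compiler exits (e.g. internal compiler errors)
# are kept as candidate compiler bugs for later triage.
import subprocess, tempfile, pathlib

def triage(c_source, compiler="gcc"):
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "case.c"
        src.write_text(c_source)
        proc = subprocess.run(
            [compiler, "-O2", "-c", str(src), "-o", str(src.with_suffix(".o"))],
            capture_output=True, text=True, timeout=30)
    if proc.returncode == 0:
        return "pass"                       # well-formed, accepted program
    if "internal compiler error" in proc.stderr or proc.returncode < 0:
        return "compiler-bug"               # crash or ICE: keep for reporting
    return "reject"                         # invalid program, rejected normally

generated = ["int main(void) { return 0; }",           # stand-ins for programs
             "int main(void) { return undeclared; }"]  # produced by a generator
results = [triage(p) for p in generated]
print("pass rate:", results.count("pass") / len(results), results)
```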

Citations: 0
Latch: Enabling large-scale automated testing on constrained systems
IF 1.5 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1016/j.scico.2024.103157
Tom Lauwaerts, Stefan Marr, Christophe Scholliers

Testing is an essential part of the software development cycle. Unfortunately, testing on constrained devices is currently very challenging. First, the limited memory of constrained devices severely restricts the size of test suites. Second, the limited processing power causes test suites to execute slowly, preventing a fast feedback loop. Third, when the constrained device becomes unresponsive, it is impossible to distinguish between a test that fails and one that simply takes very long, forcing the developer to work with timeouts. Unfortunately, timeouts can cause tests to be flaky, i.e., have unpredictable outcomes independent of code changes. Given these problems, most IoT developers rely on laborious manual testing.

In this paper, we propose the novel testing framework Latch (Large-scale Automated Testing on Constrained Hardware) to overcome the three main challenges of running large test suites on constrained hardware, as well as automate manual testing scenarios through a novel testing methodology based on debugger-like operations—we call this new testing approach managed testing.

The core idea of Latch is to enable testing on constrained devices without those devices maintaining the whole test suite in memory. Therefore, programmers script and run tests on a workstation, which then step-wise instructs the constrained device to execute each test, thereby overcoming the memory constraints. Our testing framework further allows developers to mark tests as depending on other tests. This way, Latch can skip tests that depend on previously failing tests, resulting in a faster feedback loop. Finally, Latch addresses the issue of timeouts and flaky tests by including an analysis mode that provides feedback on timeouts and the flakiness of tests.

To illustrate the expressiveness of Latch, we present testing scenarios representing unit testing, integration testing, and end-to-end testing. We evaluate the performance of Latch by testing a virtual machine against the WebAssembly specification, with a large test suite consisting of 10,213 tests running on an ESP32 microcontroller. Our experience shows that the testing framework is expressive, reliable, and reasonably fast, making it suitable to run large test suites on constrained devices. Furthermore, the debugger-like operations make it possible to closely mimic manual testing.
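
A minimal host-side sketch of the dependency-aware scheduling described above (not Latch's actual API; communication with the constrained device is replaced by a stub) shows how skipping tests whose dependencies failed shortens the feedback loop.

```python
# Host-side sketch of dependency-aware test scheduling in the spirit of Latch
# (not its real API): tests run one at a time, and a test whose declared
# dependency did not pass is skipped immediately instead of being sent to the
# constrained device.

def run_suite(tests, execute_on_device):
    """tests: list of (name, depends_on or None); returns {name: outcome}."""
    outcomes = {}
    for name, depends_on in tests:
        if depends_on and outcomes.get(depends_on) != "pass":
            outcomes[name] = "skipped"            # dependency did not pass
            continue
        outcomes[name] = "pass" if execute_on_device(name) else "fail"
    return outcomes

# Stub standing in for step-wise execution on the microcontroller.
def fake_device(name):
    return name != "spi_init"                     # pretend SPI init fails

suite = [("spi_init", None),
         ("spi_transfer", "spi_init"),            # skipped: spi_init failed
         ("gpio_toggle", None)]
print(run_suite(suite, fake_device))
# {'spi_init': 'fail', 'spi_transfer': 'skipped', 'gpio_toggle': 'pass'}
```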

Citations: 0
iGnnVD: A novel software vulnerability detection model based on integrated graph neural networks
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1016/j.scico.2024.103156
Jinfu Chen, Yemin Yin, Saihua Cai, Weijia Wang, Shengran Wang, Jiming Chen

Software vulnerability detection is a challenging task in the security field, and the boom of deep learning technology has promoted the development of automatic vulnerability detection. Compared with sequence-based deep learning models, graph neural networks (GNNs) can learn the structural features of code and therefore perform well in vulnerability detection for source code. However, different GNNs produce different detection results for the same code, and using a single kind of GNN may lead to high false positive and false negative rates. In addition, the complex structure of source code means that a single GNN model cannot effectively learn its deep features, leading to low detection accuracy. To overcome these limitations, we propose a software vulnerability detection model called iGnnVD based on integrated graph neural networks. In the proposed iGnnVD model, base detectors including GCN, GAT, and APPNP are first constructed to capture the bidirectional information in the code graph structure; then, residual connections are used to aggregate the features while retaining the features of each step; finally, a convolutional layer performs the aggregated classification. In addition, an integration module that analyzes the detection results of the three detectors for the final classification is designed using a voting strategy, to address the high false positive and false negative rates caused by relying on a single kind of base detector. We perform extensive experiments on three datasets, and the results show that, compared with existing deep learning-based vulnerability detection models, the proposed iGnnVD model improves the detection accuracy of vulnerabilities in source code, reduces the false positive and false negative rates, and exhibits good stability.
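
The voting strategy of the integration module can be illustrated with a small sketch. The three base detectors are stubbed here with precomputed per-function predictions; this is an assumption for illustration only, not the model's actual outputs.

```python
# Sketch of a majority-voting integration step (base detectors such as GCN,
# GAT and APPNP are stubbed with made-up 0/1 predictions): a function is
# flagged as vulnerable only if a majority of detectors agree, which damps the
# false positives and false negatives of any single detector.

def majority_vote(predictions):
    """predictions: list of equal-length 0/1 lists, one per base detector."""
    n_detectors = len(predictions)
    return [1 if sum(votes) * 2 > n_detectors else 0
            for votes in zip(*predictions)]

gcn_out   = [1, 0, 1, 0]     # hypothetical outputs of three base detectors
gat_out   = [1, 1, 0, 0]     # for four functions under analysis
appnp_out = [1, 0, 0, 1]

print(majority_vote([gcn_out, gat_out, appnp_out]))   # [1, 0, 0, 0]
```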

Citations: 0
BEAPI: A tool for bounded exhaustive input generation from APIs
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-05 · DOI: 10.1016/j.scico.2024.103153
Mariano Politano, Valeria Bengolea, Facundo Molina, Nazareno Aguirre, Marcelo Frias, Pablo Ponzio

Bounded exhaustive testing is a very effective technique for bug finding, which proposes to test a given program under all valid bounded inputs, for a bound provided by the developer. Existing bounded exhaustive testing techniques require the developer to provide a precise specification of the valid inputs. Such specifications are rarely present as part of the software under test, and writing them can be costly and challenging.

To address this situation, we propose BEAPI, a tool that, given a Java class under test, generates a bounded exhaustive set of objects of the class solely employing the methods of the class, without the need for a specification. BEAPI creates sequences of calls to methods from the class' public API, and executes them to generate inputs. BEAPI implements very effective pruning techniques that allow it to generate inputs efficiently.

We experimentally assessed BEAPI in several case studies from the literature, and showed that it performs comparably to the best existing specification-based bounded exhaustive generation tool (Korat), without requiring a specification of the valid inputs.
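
The idea of bounded exhaustive generation from an API alone can be illustrated on a toy class. The sketch below is not BEAPI itself (it targets Java classes), but it shows the same ingredients under simplified assumptions: breadth-first exploration of call sequences up to a bound and pruning of objects whose canonical state was already produced by a shorter sequence.

```python
# Toy illustration of API-based bounded exhaustive generation (not BEAPI):
# starting from an empty object, every sequence of public API calls up to the
# bound is explored breadth-first, and call sequences that reproduce an
# already-seen canonical state are pruned.
from itertools import product

class BoundedSet:
    def __init__(self, capacity=2):
        self.capacity, self.elems = capacity, []
    def add(self, x):
        if len(self.elems) < self.capacity and x not in self.elems:
            self.elems.append(x)
    def state(self):
        return tuple(sorted(self.elems))          # canonical form for pruning

def bounded_exhaustive(max_calls=3, values=(0, 1)):
    seen, frontier = {()}, [[]]
    for _ in range(max_calls):
        next_frontier = []
        for seq, v in product(frontier, values):
            obj = BoundedSet()
            for arg in seq + [v]:                 # replay the call sequence
                obj.add(arg)
            if obj.state() not in seen:           # prune revisited states
                seen.add(obj.state())
                next_frontier.append(seq + [v])
        frontier = next_frontier
    return seen

print(sorted(bounded_exhaustive()))   # [(), (0,), (0, 1), (1,)]
```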

Citations: 0
Parallel program analysis on path ranges
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-05-31 · DOI: 10.1016/j.scico.2024.103154
Jan Haltermann, Marie-Christine Jakobs, Cedric Richter, Heike Wehrheim

Symbolic execution is a software verification technique that runs programs symbolically and thereby checks for bugs. Ranged symbolic execution performs symbolic execution on program parts, so-called path ranges, in parallel. Due to the parallelism, verification is accelerated and hence scales to larger programs.

In this paper, we discuss a generalization of ranged symbolic execution to arbitrary program analyses. More specifically, we present a verification approach that splits programs into path ranges and then runs arbitrary analyses on the ranges in parallel. In particular, our approach allows different analyses to be run on different program parts. We have implemented this generalization on top of the tool CPAchecker and evaluated it on programs from the SV-COMP benchmark. Our evaluation shows that verification can benefit from the parallelization of the verification task, but also needs a form of work stealing (between analyses) to become efficient.
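
As a schematic of the approach, the sketch below splits a verification task into ranges and checks them in parallel. For simplicity, a range here is an interval of input values and the per-range "analysis" is one brute-force checker for every range, whereas the paper's ranges are defined over program paths and the per-range analyses are full (possibly different) program analyses.

```python
# Schematic of ranged analysis: the work is partitioned into ranges that are
# verified by independent workers in parallel; a property violation found in
# any range refutes the whole program.
from concurrent.futures import ProcessPoolExecutor

def program(x):
    return 100 // (x - 7)          # crashes for x == 7

def brute_force_check(rng):        # one "analysis": exhaustive over its range
    lo, hi = rng
    for x in range(lo, hi):
        try:
            program(x)
        except ZeroDivisionError:
            return ("bug", x)
    return ("safe", rng)

if __name__ == "__main__":
    ranges = [(0, 5), (5, 10), (10, 15)]          # one range per worker
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(brute_force_check, ranges))
    print(results)   # [('safe', (0, 5)), ('bug', 7), ('safe', (10, 15))]
```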

Citations: 0
Taming shared mutable states of operating systems in Rust
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-05-27 · DOI: 10.1016/j.scico.2024.103152
Jaemin Hong, Sunghwan Shim, Sanguk Park, Tae Woo Kim, Jungwoo Kim, Junsoo Lee, Sukyoung Ryu, Jeehoon Kang

Operating systems (OSs) suffer from pervasive memory bugs. Their primary source is shared mutable states, crucial to low-level control and efficiency. The safety of shared mutable states is not guaranteed by C/C++, in which legacy OSs are typically written. Recently, researchers have adopted Rust into OS development to implement clean-slate OSs with fewer memory bugs. Rust ensures the safety of shared mutable states that follow the “aliasing XOR mutability” discipline via its type system. With the success of Rust in clean-slate OSs, the industry has become interested in rewriting legacy OSs in Rust. However, one of the most significant obstacles to this goal is shared mutable states that are aliased AND mutable (A&M). While they are essential to the performance of legacy OSs, Rust does not guarantee their safety. Instead, programmers have identified A&M states with the same reasoning principle dubbed an A&M pattern and implemented its modular abstraction to facilitate safety reasoning. This paper investigates modular abstractions for A&M patterns in legacy OSs. We present modular abstractions for six A&M patterns in the xv6 OS. Our investigation of Linux and clean-slate Rust OSs shows that the patterns are practical, as all of them are utilized in Linux, and the abstractions are original, as none of them are found in the Rust OSs. Using the abstractions, we implemented xv6Rust, a complete rewrite of xv6 in Rust. The abstractions incur no run-time overhead compared to xv6 while reducing the reasoning cost of xv6Rust to the level of the clean-slate Rust OSs.

Citations: 0
Preface Formal Techniques for Safety-Critical Systems (FTSCS 2022)
IF 1.3 · CAS Tier 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-05-21 · DOI: 10.1016/j.scico.2024.103149
Cyrille Artho, Peter Csaba Ölveczky
{"title":"Preface Formal Techniques for Safety-Critical Systems (FTSCS 2022)","authors":"Cyrille Artho ,&nbsp;Peter Csaba Ölveczky","doi":"10.1016/j.scico.2024.103149","DOIUrl":"10.1016/j.scico.2024.103149","url":null,"abstract":"","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"237 ","pages":"Article 103149"},"PeriodicalIF":1.3,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141145140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0