
Science of Computer Programming: Latest Publications

ScaRLib: Towards a hybrid toolchain for aggregate computing and many-agent reinforcement learning
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-23 | DOI: 10.1016/j.scico.2024.103176
D. Domini, F. Cavallari, G. Aguzzi, M. Viroli

This article introduces ScaRLib, a Scala-based framework that aims to streamline the development of cyber-physical swarm scenarios (i.e., systems of many interacting distributed devices that collectively accomplish system-wide tasks) by integrating macroprogramming and multi-agent reinforcement learning to design collective behavior. This framework serves as the starting point for a broader toolchain that will integrate these two approaches at multiple points to harness the capabilities of both, enabling the expression of complex and adaptive collective behavior.

Citations: 0
Encoding TLA+ proof obligations safely for SMT
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-23 | DOI: 10.1016/j.scico.2024.103178
Rosalie Defourné

The TLA+ Proof System (TLAPS) allows users to verify proofs with the support of automated theorem provers, including SMT solvers. To increase trust in TLAPS, we revisited the encoding of TLA+ for SMT, whose implementation had become too complex. Our approach is based on a first-order axiomatization with E-matching patterns. The new encoding is available with TLAPS and achieves performance similar to that of the previous version, despite its simpler design.
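For readers unfamiliar with E-matching, the sketch below renders the general shape of a quantified SMT-LIB axiom carrying a `:pattern` annotation, the trigger term that tells the solver when to instantiate the quantifier. The axiom, sorts, and function names here are invented for illustration; this is not the actual TLAPS axiomatization.

```python
# Illustrative only: build an SMT-LIB assertion whose quantifier carries
# an E-matching trigger. The pattern restricts instantiation to ground
# terms that syntactically match the trigger, keeping search tractable.

def axiom_with_pattern(vars_sorts, body, trigger):
    """Render a first-order axiom with a :pattern annotation.
    vars_sorts: list of (variable, sort) pairs; body/trigger: SMT-LIB terms."""
    binders = " ".join(f"({v} {s})" for v, s in vars_sorts)
    return f"(assert (forall ({binders}) (! {body} :pattern ({trigger}))))"

# Hypothetical axiom: set-membership distributes over union, triggered
# whenever the solver sees a term of the form (mem x (union a b)).
ax = axiom_with_pattern(
    [("x", "U"), ("a", "U"), ("b", "U")],
    "(= (mem x (union a b)) (or (mem x a) (mem x b)))",
    "(mem x (union a b))",
)
print(ax)
```

Choosing the left-hand side of the equivalence as the trigger is the usual style: the axiom only fires on terms the proof obligation already mentions.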

Citations: 0
IPFS requested content location service
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-18 | DOI: 10.1016/j.scico.2024.103174
Pedro Ákos Costa , João Leitão , Yannis Psaras

This paper introduces the IPFS requested content location service, a software service to monitor the operation of IPFS from the perspective of the content requested through IPFS gateways. The software is provided as a Docker stack that consumes the logs of one or more IPFS gateways, extracts the CID of the requested content and the IP address of the requester, and queries the IPFS network for the providers of the content. The software also matches the IP addresses of the requesters and providers with their geographic location, and stores the results in a database for later analysis. The software has been used in our previous measurement study, published at DAIS'23, that analyzed the operation of IPFS from the perspective of the content requested through gateways.
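The pipeline described above (consume logs, extract CID and requester IP, resolve providers, geolocate, store) can be sketched as follows. The log format, function names, and stubbed services are our assumptions for illustration, not the tool's actual interface.

```python
# Schematic sketch of a gateway-log-to-location pipeline in the spirit of
# the described service. DHT lookup and geolocation are injected as plain
# functions so the sketch stays self-contained and testable.
import re
from collections import defaultdict

GATEWAY_LOG_LINE = re.compile(r"GET /ipfs/(?P<cid>\w+) from (?P<ip>[\d.]+)")

def parse_gateway_log(lines):
    """Extract (CID, requester IP) pairs from hypothetical gateway log lines."""
    for line in lines:
        m = GATEWAY_LOG_LINE.search(line)
        if m:
            yield m.group("cid"), m.group("ip")

def locate(records, find_providers, geolocate):
    """Join each request with its content providers and their locations.
    `find_providers` stands in for a DHT provider query; `geolocate` for
    an IP-to-location lookup."""
    table = defaultdict(list)
    for cid, ip in records:
        for prov in find_providers(cid):
            table[cid].append({
                "requester": (ip, geolocate(ip)),
                "provider": (prov, geolocate(prov)),
            })
    return table

# Toy run with stubbed network services.
logs = ["GET /ipfs/QmAbc from 1.2.3.4"]
rows = locate(parse_gateway_log(logs),
              find_providers=lambda cid: ["5.6.7.8"],
              geolocate=lambda ip: "somewhere")
```

In the real tool, `rows` would be written to a database rather than kept in memory.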

Citations: 0
Subsumption, correctness and relative correctness: Implications for software testing
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-17 | DOI: 10.1016/j.scico.2024.103177
Samia AlBlwi , Imen Marsit , Besma Khaireddine , Amani Ayad , JiMeng Loh , Ali Mili

Context. Several research areas have emerged and proceeded independently when in fact they have much in common. These include: mutant subsumption and mutant set minimization; relative correctness and the semantic definition of faults; differentiator sets and their application to test diversity; generate-and-validate methods of program repair; test suite coverage metrics.

Objective. Highlight their analogies, commonalities and overlaps; explore their potential for synergy and shared research goals; unify several disparate concepts around a minimal set of artifacts.

Method. Introduce and analyze a minimal set of concepts that enable us to model these disparate research efforts, and explore how these models may enable us to share insights between different research directions, and advance their respective goals.

Results. Capturing absolute (total and partial) correctness and relative (total and partial) correctness with a single concept: detector sets. Using the same concept to quantify the effectiveness of test suites, and prove that the proposed measure satisfies appealing monotonicity properties. Using the measure of test suite effectiveness to model mutant set minimization as an optimization problem, characterized by an objective function and a constraint.

Generalizing the concept of mutant subsumption using the concept of differentiator sets. Identifying analogies between detector sets and differentiator sets, and inferring relationships between subsumption and relative correctness.
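The interplay of these concepts can be made concrete with a toy kill matrix. This is our own illustrative reading, not the paper's formal definitions: a suite's effectiveness counts the mutants it detects, and mutant subsumption licenses a minimization that preserves that measure.

```python
# Hypothetical kill matrix: mutant -> the set of tests that expose it.
kills = {
    "m1": {"t1", "t3"},
    "m2": {"t1", "t2"},
    "m3": {"t2"},
}

def effectiveness(suite):
    """Number of mutants the suite detects; monotone in the suite,
    echoing the monotonicity property claimed for the paper's measure."""
    return len({m for m, tests in kills.items() if tests & suite})

def subsumes(m, n):
    """Mutant m subsumes n if every test that kills m also kills n."""
    return m != n and kills[m] <= kills[n]

def minimize(mutants):
    """Keep only mutants not subsumed by another one: a greedy take on
    mutant set minimization as an objective (fewest mutants) under the
    constraint that detection power is preserved."""
    return {n for n in mutants if not any(subsumes(m, n) for m in mutants)}
```

Here `m3` subsumes `m2` (any test killing `m3` also kills `m2`), so `m2` can be dropped without weakening the mutant set.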

Conclusion. This paper does not aim to answer any pressing research question as much as it aims to raise research questions that use the insights gained from one research venue to gain a fresh perspective on a related research issue.

Citations: 0
Smart contract vulnerability detection using wide and deep neural network
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-10 | DOI: 10.1016/j.scico.2024.103172
Samuel Banning Osei , Zhongchen Ma , Rubing Huang

Smart contracts, integral to blockchain technology, automate agreements without intermediaries, ensuring transparency and security across various sectors. However, the immutable nature of blockchain exposes deployed contracts to potential risks if they contain vulnerabilities. Current approaches, including symbolic execution and graph-based machine learning, aim to ensure smart contract security. However, these methods suffer from limitations such as high false positive rates, heavy reliance on training data, and over-generalization.

The goal of this paper is to investigate the application of Wide and Deep Neural Networks in identifying vulnerabilities within smart contracts. We introduce WIDENNET, a method based on deep neural networks, designed to detect reentrancy and timestamp dependence vulnerabilities in smart contracts. Our approach involves extracting bytecodes from the contracts and converting them into Operational Codes (OPCODES), which are then transformed into distinct vector representations. These vectors are subsequently fed into the neural network to extract both complex and simple patterns for vulnerability detection. Testing on real-world datasets yielded an average accuracy of 83.07% and a precision of 83.13%. Our method offers a potential solution to mitigate vulnerabilities in blockchain applications.
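The preprocessing stage (bytecode to OPCODE sequence to vector) can be sketched as below. The opcode table is a tiny subset of the real EVM instruction set, and the simple count-vector encoding stands in for the paper's vector representation; the wide-and-deep network itself is omitted.

```python
# Toy bytecode-to-opcode-to-vector preprocessing in the spirit of the
# described approach. Opcode values are real EVM ones (PUSH1=0x60,
# ADD=0x01, SLOAD=0x54, SSTORE=0x55, RETURN=0xF3), but the table is a
# tiny illustrative subset.
OPCODES = {0x60: "PUSH1", 0x01: "ADD", 0x54: "SLOAD",
           0x55: "SSTORE", 0xF3: "RETURN"}

def to_opcodes(bytecode_hex):
    """Decode a hex bytecode string into an opcode name sequence."""
    out, i, data = [], 0, bytes.fromhex(bytecode_hex)
    while i < len(data):
        op = OPCODES.get(data[i], "UNKNOWN")
        out.append(op)
        # PUSH1 carries a 1-byte immediate operand that we skip;
        # real PUSH2..PUSH32 carry larger operands, omitted here.
        i += 2 if op == "PUSH1" else 1
    return out

VOCAB = sorted(set(OPCODES.values()) | {"UNKNOWN"})

def count_vector(ops):
    """Bag-of-opcodes vector over a fixed vocabulary."""
    return [ops.count(v) for v in VOCAB]

ops = to_opcodes("6001600155")   # PUSH1 0x01, PUSH1 0x01, SSTORE
vec = count_vector(ops)
```

Vectors like `vec` would then feed the wide (memorization) and deep (generalization) branches of the network.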

Citations: 0
DPFuzz: A fuzz testing tool based on the guidance of defect prediction
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-08 | DOI: 10.1016/j.scico.2024.103170
Zhanqi Cui , Haochen Jin , Xiang Chen , Rongcun Wang , Xiulei Liu

Fuzz testing is an automated testing technique that is recognized for its efficiency and scalability. Despite its advantages, the growing complexity and scale of software have made adequately testing software increasingly challenging. If fuzz testing can prioritize resources for modules with higher defect proneness, it can effectively enhance its defect detection performance. In this paper, we introduce DPFuzz, a tool for prioritizing the resource allocation of fuzz testing. DPFuzz guides fuzz testing by calculating the fitness score, which is based on the coverage of modules with different defect proneness. DPFuzz also demonstrates the practicability of using defect prediction in software quality assurance and has confirmed its excellent defect detection performance through experiments.
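A defect-prediction-guided fitness score of this kind can be sketched as follows. The exact formula and module names are our illustrative assumptions, not DPFuzz's actual implementation: each covered module is weighted by its predicted defect proneness, so seeds exercising risk-prone code rank higher in the fuzzer's queue.

```python
# Hypothetical per-module defect proneness, as produced by a trained
# defect-prediction model (values invented for illustration).
proneness = {"parser": 0.9, "logger": 0.1, "net": 0.5}

def fitness(covered_modules):
    """Fitness of a seed = sum of proneness over the modules it covers."""
    return sum(proneness.get(m, 0.0) for m in covered_modules)

def prioritize(seeds):
    """Order seeds for the fuzzing queue, highest fitness first.
    seeds: mapping seed name -> set of modules its execution covered."""
    return sorted(seeds, key=lambda s: fitness(seeds[s]), reverse=True)

queue = prioritize({"s1": {"logger"},
                    "s2": {"parser", "net"},
                    "s3": {"net"}})
```

A seed touching the defect-prone `parser` module outranks one that only exercises logging code, which is the intuition behind guiding resource allocation by predicted defects.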

Citations: 0
Multi-objective differential evolution in the generation of adversarial examples
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-08 | DOI: 10.1016/j.scico.2024.103169
Antony Bartlett, Cynthia C.S. Liem, Annibale Panichella

Adversarial examples remain a critical concern for the robustness of deep learning models, showcasing vulnerabilities to subtle input manipulations. While earlier research focused on generating such examples using white-box strategies, later research focused on gradient-based black-box strategies, as models' internals often are not accessible to external attackers. This paper extends our prior work by exploring a gradient-free search-based algorithm for adversarial example generation, with particular emphasis on differential evolution (DE). Building on top of the classic DE operators, we propose five variants of gradient-free algorithms: a single-objective approach, two multi-objective variations, and two many-objective strategies. Our study on five canonical image classification models shows that whilst one variant remains the fastest approach, others consistently produce more minimal adversarial attacks (i.e., with fewer image perturbations). Moreover, we found that applying a post-process minimization to our adversarial images further reduces the number of changes and the overall delta variation (image noise).
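The search engine behind all of these variants is classic differential evolution. The sketch below is a generic DE/rand/1/bin loop minimizing a stand-in objective; a real attack would instead score a candidate perturbation by perturbation size and by the target classifier's misbehavior, neither of which is modeled here.

```python
# Generic differential evolution (DE/rand/1/bin): mutate with a scaled
# difference of two population members, binomially crossover with the
# current individual, keep the trial if it is no worse.
import random

def de(objective, dim, bounds=(-1.0, 1.0), pop_size=20,
       F=0.8, CR=0.9, generations=100, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)          # force at least one new gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]  # clamp to bounds
            f = objective(trial)
            if f <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy objective: the sphere function, standing in for an attack score.
x, f = de(lambda v: sum(t * t for t in v), dim=5)
```

The multi- and many-objective variants replace the single greedy comparison with Pareto-based selection over several such scores.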

Citations: 0
CRAG – a combinatorial testing-based generator of road geometries for ADS testing
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-08 | DOI: 10.1016/j.scico.2024.103171
Paolo Arcaini , Ahmet Cetinkaya

Simulation-based testing of autonomous driving systems (ADS) consists in finding scenarios in which the ADS misbehaves, e.g., it leads the car to drive off the road. The road geometry is an important feature of the scenario, as it has a direct impact on the ADS, e.g., on its ability to keep the car inside the driving lane. In this paper, we present CRAG, a road generator for ADS testing. CRAG uses combinatorial testing to explore high-level road configurations, and search algorithms to find concrete road geometries within these configurations. CRAG has been designed so that it can be easily extended in terms of combinatorial test suite generators, search algorithms, and test goals.
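The combinatorial layer can be sketched as enumerating abstract road configurations over a handful of scenario parameters, each of which would then seed a search for a concrete geometry. Parameter names and values are invented for illustration, and for brevity this uses the full Cartesian product where a combinatorial test suite generator would typically emit a smaller covering array (e.g., pairwise).

```python
# Enumerate abstract road configurations as value combinations of
# hypothetical scenario parameters; each configuration would be handed
# to a search algorithm that instantiates a concrete road geometry.
from itertools import product

PARAMS = {
    "num_curves": [1, 2, 3],
    "curve_direction": ["left", "right"],
    "slope": ["flat", "uphill"],
}

def configurations(params):
    """Yield each configuration as a parameter-name -> value dict."""
    keys = list(params)
    for combo in product(*(params[k] for k in keys)):
        yield dict(zip(keys, combo))

configs = list(configurations(PARAMS))
```

Swapping `product` for a covering-array generator changes only `configurations`, which mirrors the extensibility the tool advertises.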

Citations: 0
Prescriptive procedure for manual code smell annotation
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-26 | DOI: 10.1016/j.scico.2024.103168
Simona Prokić, Nikola Luburić, Jelena Slivka, Aleksandar Kovačević

Code smells are structures in code that present potential software maintainability issues. Manually constructing high-quality datasets to train ML models for code smell detection is challenging. Inconsistent annotations, small size, a non-realistic smell-to-non-smell ratio, and poor smell coverage hinder dataset quality. These issues arise mainly due to the time-consuming nature of manual annotation and annotators' disagreements caused by ambiguous and vague smell definitions.

To address challenges related to building high-quality datasets suitable for training ML models for smell detection, we designed a prescriptive procedure for manual code smell annotation. The proposed procedure represents an extension of our previous work, aiming to support the annotation of any smell defined by Fowler. We validated the procedure by employing three annotators to annotate smells following the proposed annotation procedure.

The main contribution of this paper is a prescriptive annotation procedure that benefits the following stakeholders: annotators building high-quality smell datasets that can be used to train ML models, ML researchers building ML models for smell detection, and software engineers employing ML models to enhance software maintainability. Secondary contributions are a code smell dataset containing the Data Class, Feature Envy, and Refused Bequest smells, and the DataSet Explorer tool, which supports annotators during the annotation procedure.

Citations: 0
GraphPyRec: A novel graph-based approach for fine-grained Python code recommendation
IF 1.5 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-18 | DOI: 10.1016/j.scico.2024.103166
Xing Zong, Shang Zheng, Haitao Zou, Hualong Yu, Shang Gao

Artificial intelligence has been widely applied in software engineering areas such as code recommendation. Significant progress has been made in code recommendation for static languages in recent years, but it remains challenging for dynamic languages like Python as accurately determining data flows before runtime is difficult. This limitation hinders data flow analysis, affecting the performance of code recommendation methods that rely on code analysis. In this study, a graph-based Python recommendation approach (GraphPyRec) is proposed by converting source code into a graph representation that captures both semantic and dynamic information. Nodes represent semantic information, with unique rules defined for various code statements. Edges depict control flow and data flow, utilizing a child-sibling-like process and a dedicated algorithm for data transfer extraction. Alongside the graph, a bag of words is created to include essential names, and a pre-trained BERT model transforms it into vectors. These vectors are integrated into a Gated Graph Neural Network (GGNN) process of the code recommendation model, enhancing its effectiveness and accuracy. To validate the proposed method, we crawled over a million lines of code from GitHub. Experimental results show that GraphPyRec outperforms existing mainstream Python code recommendation methods, achieving Top-1, 5, and 10 accuracy rates of 68.52%, 88.92%, and 94.05%, respectively, along with a Mean Reciprocal Rank (MRR) of 0.772.
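The core representation (statement nodes linked by control-flow and data-flow edges) can be sketched with Python's own `ast` module. This is far simpler than the paper's node rules and def-use extraction algorithm, and it omits the BERT vectors and the GGNN, but it shows the graph shape such a model consumes.

```python
# Minimal code-to-graph sketch: one node per top-level statement,
# sequential control-flow edges, and data-flow edges from the statement
# that last defined a name to each statement that reads it.
import ast

def code_to_graph(src):
    tree = ast.parse(src)
    nodes, cf_edges, df_edges, last_def = [], [], [], {}
    for idx, stmt in enumerate(tree.body):
        nodes.append(type(stmt).__name__)        # semantic node label
        if idx > 0:
            cf_edges.append((idx - 1, idx))      # control flow: sequencing
        for name in (n.id for n in ast.walk(stmt)
                     if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)):
            if name in last_def:
                df_edges.append((last_def[name], idx))  # data flow: def -> use
        for tgt in (n.id for n in ast.walk(stmt)
                    if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)):
            last_def[tgt] = idx
    return nodes, cf_edges, df_edges

nodes, cf, df = code_to_graph("x = 1\ny = x + 1\nprint(y)")
```

A graph neural network then propagates information along both edge types; the sketch deliberately ignores branching, loops, and nested scopes, which a real extractor must handle.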

{"title":"GraphPyRec: A novel graph-based approach for fine-grained Python code recommendation","authors":"Xing Zong,&nbsp;Shang Zheng,&nbsp;Haitao Zou,&nbsp;Hualong Yu,&nbsp;Shang Gao","doi":"10.1016/j.scico.2024.103166","DOIUrl":"https://doi.org/10.1016/j.scico.2024.103166","url":null,"abstract":"<div><p>Artificial intelligence has been widely applied in software engineering areas such as code recommendation. Significant progress has been made in code recommendation for static languages in recent years, but it remains challenging for dynamic languages like Python as accurately determining data flows before runtime is difficult. This limitation hinders data flow analysis, affecting the performance of code recommendation methods that rely on code analysis. In this study, a graph-based Python recommendation approach (GraphPyRec) is proposed by converting source code into a graph representation that captures both semantic and dynamic information. Nodes represent semantic information, with unique rules defined for various code statements. Edges depict control flow and data flow, utilizing a child-sibling-like process and a dedicated algorithm for data transfer extraction. Alongside the graph, a bag of words is created to include essential names, and a pre-trained BERT model transforms it into vectors. These vectors are integrated into a Gated Graph Neural Network (GGNN) process of the code recommendation model, enhancing its effectiveness and accuracy. To validate the proposed method, we crawled over a million lines of code from GitHub. Experimental results show that GraphPyRec outperforms existing mainstream Python code recommendation methods, achieving Top-1, 5, and 10 accuracy rates of 68.52%, 88.92%, and 94.05%, respectively, along with a Mean Reciprocal Rank (MRR) of 0.772.</p></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"238 ","pages":"Article 103166"},"PeriodicalIF":1.5,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141487280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
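The GGNN stage the abstract refers to amounts to repeated gated message passing over the code graph. The single propagation step below is a generic GRU-style sketch in NumPy under assumed weight shapes, not GraphPyRec's model; node count, dimensions, and the toy edge list are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, adj, W_msg, W_z, U_z, W_r, U_r, W_h, U_h):
    """One gated propagation step over node states h of shape (n, d).

    Neighbor messages are aggregated through the adjacency matrix,
    then each node state is updated with GRU-style update (z) and
    reset (r) gates.
    """
    m = adj @ h @ W_msg                         # aggregate incoming messages
    z = sigmoid(m @ W_z + h @ U_z)              # update gate
    r = sigmoid(m @ W_r + h @ U_r)              # reset gate
    h_cand = np.tanh(m @ W_h + (r * h) @ U_h)   # candidate state
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(0)
n, d = 4, 8                                     # 4 statement nodes, 8-dim states
adj = np.zeros((n, n))
for src, dst in [(0, 1), (1, 2), (1, 3)]:       # toy control-flow edges
    adj[dst, src] = 1.0                         # message flows src -> dst
h = rng.normal(size=(n, d))
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(7)]
h_new = ggnn_step(h, adj, *weights)
print(h_new.shape)  # (4, 8)
```

In practice the initial states `h` would be the BERT vectors for each node's bag of words, the step would be iterated several times, and edge types (control flow vs. data flow) would get separate message weights.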