
Latest Publications in Automated Software Engineering

On the role of search budgets in model-based software refactoring optimization
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-10-18. DOI: 10.1007/s10515-025-00564-y
J. Andres Diaz-Pace, Daniele Di Pompeo, Michele Tucci

Software model optimization is a process that automatically generates design alternatives aimed at improving quantifiable non-functional properties of software systems, such as performance and reliability. Multi-objective evolutionary algorithms effectively help designers identify trade-offs among the desired non-functional properties. To reduce the use of computational resources, this work examines the impact of implementing a search budget to limit the search for design alternatives. In particular, we analyze how time budgets affect the quality of Pareto fronts by utilizing quality indicators and exploring the structural features of the generated design alternatives. This study identifies distinct behavioral differences among evolutionary algorithms when a search budget is implemented. It further reveals that design alternatives generated under a budget are structurally different from those produced without one. Additionally, we offer recommendations for designers on selecting algorithms in relation to time constraints, thereby facilitating the effective application of automated refactoring to improve non-functional properties.
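The budgeted-search idea can be sketched with a toy two-objective problem: run a search until a wall-clock budget expires, then keep only the non-dominated (Pareto-optimal) alternatives. This is a minimal illustration under invented objective functions, not the authors' evolutionary setup.

```python
import random
import time

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def budgeted_search(evaluate, sample, budget_s):
    """Random search until the time budget expires; return the Pareto front found."""
    found = []
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        found.append(evaluate(sample()))
    return pareto_front(found)

# Toy two-objective problem standing in for e.g. (performance, reliability) trade-offs.
random.seed(0)
front = budgeted_search(
    evaluate=lambda x: (x, (x - 1.0) ** 2),   # both objectives minimized
    sample=lambda: random.uniform(0.0, 2.0),
    budget_s=0.05,
)
```

The quality indicators the paper mentions (e.g. hypervolume) are then computed over fronts like `front` to compare budgets and algorithms.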

Citations: 0
Graph neural networks for precise bug localization through structural program analysis
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-10-18. DOI: 10.1007/s10515-025-00556-y
Leila Yousofvand, Seyfollah Soleimani, Vahid Rafe, Amin Nikanjam

Bug localization (BL) is one of the major steps in the program repair process: it seeks to find a set of commands causing a program to crash or fail. As the complexity and scale of modern software development soar, locating bugs and their sources quickly is increasingly infeasible, so there is a huge demand for BL techniques with minimal human intervention. A graph representing source code typically encodes valuable information about both the syntactic and semantic structures of programs. Many software bugs are associated with these structures, making graphs particularly suitable for BL. Therefore, the key contributions of this work involve labeling graph nodes, classifying these nodes, and addressing imbalanced classifications within the graph data structure to effectively locate bugs in code. The method proposed in this paper first introduces a graph-based bug classifier. For this purpose, the program source codes are mapped to a graph representation. Since the graph nodes do not have labels, the Gumtree algorithm is used to label them by comparing the buggy graphs with the corresponding bug-free ones. Afterward, a trained, supervised node classifier based on a graph neural network (GNN) is applied to classify the nodes as buggy or bug-free. Given the imbalance in the data, accuracy, precision, recall, and F1-score metrics are used for evaluation. Experimental results on identical datasets show that the proposed method outperforms other related approaches. The proposed approach effectively localizes a broader spectrum of bug types, such as undefined properties, functional bugs, variable naming errors, and variable misuse issues.
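Because the evaluation relies on precision, recall, and F1 over imbalanced node labels, those metrics can be computed directly from confusion counts. A minimal sketch follows; the toy labels are invented, not drawn from the paper's datasets.

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for the positive (buggy) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Imbalanced toy labels: 1 = buggy node, 0 = bug-free node (90/10 split).
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 88 + [1] * 2 + [1] * 6 + [0] * 4
p, r, f = prf1(y_true, y_pred)   # precision 0.75, recall 0.6
```

With such a skew, accuracy alone would look high even for a trivial classifier, which is why the paper reports all four metrics.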

Citations: 0
Decomposition then watermarking: Enhancing code traceability with dual-channel code watermarking
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-10-10. DOI: 10.1007/s10515-025-00561-1
Haibo Lin, Zhong Li, Ruihua Ji, Minxue Pan, Tian Zhang, Nan Wu, Xuandong Li

Code watermarking has gained increasing attention for tracing the provenance of code with the rapid growth of the open-source community. Existing work on code watermarking has shown promising results yet still falls short, especially when a multi-bit watermark for encoding diverse information is required. In this paper, we propose DWC, a novel code watermarking method with high watermark capacity. The key idea of DWC is to first decompose the code into natural and formal channels, then embed the watermark separately into each channel based solely on its respective information. As such, DWC reduces the mutual interference between these two channels and the impacts of irrelevant information within the code, thus enabling more effective transformations for embedding watermarks with higher capacity and robustness. Our extensive experiments on source code snippets in four programming languages (C, C++, Java, and Python) demonstrate the effectiveness, efficiency, and capability of DWC in embedding multi-bit watermarks, as well as the utility and robustness of the watermarked code it generates.
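The multi-bit, natural-channel intuition can be illustrated with a toy scheme: picking between two semantically equivalent identifier names encodes one watermark bit. This is a deliberately simplified illustration, not DWC's actual transformations, and the synonym pairs are hypothetical.

```python
# Hypothetical synonym pairs: using the first name encodes bit 0, the second bit 1.
NATURAL_PAIRS = [("count", "total"), ("buf", "buffer"), ("idx", "index")]

def embed(bits):
    """Render one identifier per watermark bit; the choice within each pair carries the bit."""
    return [pair[b] for pair, b in zip(NATURAL_PAIRS, bits)]

def extract(identifiers):
    """Recover the bits from which member of each pair appears in the code."""
    return [pair.index(name) for pair, name in zip(NATURAL_PAIRS, identifiers)]

names = embed([1, 0, 1])   # -> ["total", "buf", "index"]
assert extract(names) == [1, 0, 1]
```

A formal-channel counterpart would instead rewrite program structure (e.g. equivalent control-flow forms); DWC's point is that embedding into each channel separately avoids the two interfering with one another.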

Citations: 0
A sign language to SQL query translation system for enhancing database accessibility
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-10-07. DOI: 10.1007/s10515-025-00558-w
Guocang Yang, Dawei Yuan, Tao Zhang, Zhenghan Chen

Structured Query Language (SQL) is a standard language for interacting with relational databases and is widely used across various information systems, either through direct query execution or via object-relational mapping (ORM) frameworks. Recent approaches have focused on converting natural language into SQL to simplify database development for users without programming expertise. However, these methods overlook direct translation from sign language—an essential modality for users such as the deaf community who may lack experience with SQL syntax. In this paper, we present SIGN2SQL, an innovative end-to-end framework that generates SQL queries from signed input. The system first employs a dedicated gesture recognition module to interpret the visual signals, followed by a convolutional neural network (CNN)-based model that produces the corresponding SQL statements. Trained on a well-annotated dataset, SIGN2SQL is evaluated against multiple pipeline-based baselines. Experimental results demonstrate that SIGN2SQL outperforms existing methods in both effectiveness and efficiency, particularly for SELECT statements with WHERE clauses. It achieves an execution accuracy of 89.8%, highlighting its potential as an accessible and inclusive database interaction interface.
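Execution accuracy, the metric reported above, generally counts a predicted query as correct when it returns the same rows as the gold query on the same database. A minimal sketch with `sqlite3` follows; the schema and queries are invented for illustration.

```python
import sqlite3

def execution_match(db, predicted_sql, gold_sql):
    """Two queries 'execution-match' when they return the same multiset of rows."""
    cur = db.cursor()
    pred = sorted(cur.execute(predicted_sql).fetchall())
    gold = sorted(cur.execute(gold_sql).fetchall())
    return pred == gold

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, age INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)", [("ana", 30), ("bo", 17)])

# Syntactically different but semantically equivalent queries still match.
ok = execution_match(db,
                     "SELECT name FROM users WHERE age >= 18",
                     "SELECT name FROM users WHERE NOT age < 18")
```

This is why execution accuracy is more forgiving than exact string match: the two WHERE clauses differ textually yet produce identical results.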

Citations: 0
Toward efficient testing of graph neural networks via test input prioritization
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-10-07. DOI: 10.1007/s10515-025-00554-0
Lichen Yang, Qiang Wang, Zhonghao Yang, Daojing He, Yu Li

Graph Neural Networks (GNNs) have demonstrated remarkable efficacy in handling graph-structured data; however, they exhibit failures after deployment, which can cause severe consequences. Hence, conducting thorough testing before deployment becomes imperative to ensure the reliability of GNNs. However, thorough testing requires numerous manually annotated test data. To mitigate the annotation cost, strategically prioritizing and labeling high-quality unlabeled inputs for testing becomes crucial, which facilitates uncovering more model failures with a limited labeling budget. Unfortunately, existing test input prioritization techniques either overlook the valuable information contained in graph structures or are overly reliant on attributes extracted from the target model, i.e., model-aware attributes, whose quality can vary significantly. To address these issues, we propose a novel test input prioritization framework, named GraphRank, for GNNs. GraphRank introduces model-agnostic attributes to compensate for the limitations of the model-aware ones. It also leverages the graph structure information to aggregate attributes from neighboring nodes, thereby enhancing the model-aware and model-agnostic attributes. Furthermore, GraphRank combines the above attributes with a binary classifier, using it as a ranking model to prioritize inputs. This classifier undergoes iterative training, which enables it to learn from each round’s feedback and improve its performance accordingly. Extensive experiments demonstrate GraphRank’s superiority over existing techniques.
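A common model-aware attribute for test input prioritization is predictive uncertainty: rank unlabeled inputs by the entropy of the model's softmax output and spend the labeling budget on the most uncertain ones. This sketch illustrates only that step; GraphRank itself combines several richer model-aware and model-agnostic attributes with a learned ranker.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def prioritize(softmax_outputs, budget):
    """Return indices of the `budget` most uncertain inputs, most uncertain first."""
    ranked = sorted(range(len(softmax_outputs)),
                    key=lambda i: entropy(softmax_outputs[i]),
                    reverse=True)
    return ranked[:budget]

# Toy per-node softmax outputs from a GNN; index 1 is the least confident.
outputs = [[0.98, 0.02], [0.55, 0.45], [0.70, 0.30]]
picks = prioritize(outputs, budget=2)
```

Purely confidence-based ranking is exactly the baseline the paper criticizes when the model's attributes are unreliable, which motivates adding model-agnostic and graph-structural signals.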

Citations: 0
Graph based transfer learning with orthogonal tunning for functionality size insights
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-10-06. DOI: 10.1007/s10515-025-00562-0
Nevena Ranković, Dragica Ranković, Gonzalo Nápoles, Federico Zamberlan

Function Point Analysis (FPA) is a method in software engineering that focuses on identifying the functions provided by a software system to users, such as data input, processing, output, and database management. These functions are classified according to complexity to quantify the system’s size in functional point units. In this paper, we propose two graph neural networks: a Graph-based Similarity Detection Neural Network (GSDNN) and a Prior-Structural Information Graph Neural Network (PSI-GNN) with a pre-trained layer using transfer learning, to define the best model for functional size prediction and uncover patterns and trends in data. Additionally, the NESMA (Netherlands Software Metrics Users Association) method, from the functional families approach, will be in focus, where the ISBSG (International Software Benchmarking Standards Group) dataset, which provides standardized and relevant data for comparing software performance, was used to analyze 1704 industrial software projects. The goal was to identify the graph architecture with the smallest number of experiments to be performed and the lowest Mean Magnitude Relative Error (MMRE) using orthogonal-array tuning optimization via Latin Square extraction. In the proposed approach, the number of experiments is fewer than 8 for each dataset, and a minimum MMRE value of 0.97% was obtained using PSI-GNN. Additionally, the impact of five input features on the change in MMRE value was analyzed with the top-performing model, employing the SHAP (SHapley Additive exPlanations) feature importance method, visualized through GraphExplainer. The frequency of user-initiated transactions, quantified technically, emerged as the most significant determinant within the NESMA framework.
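The MMRE indicator used above is the mean of |actual − predicted| / actual over the evaluated projects. A minimal sketch (the function-point values below are invented, not from the ISBSG dataset):

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error over paired size estimates."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical actual vs. predicted functional sizes (in function points).
actual_fp    = [100.0, 250.0, 80.0]
predicted_fp = [ 90.0, 260.0, 84.0]
score = mmre(actual_fp, predicted_fp)   # (0.10 + 0.04 + 0.05) / 3
```

Lower is better; the paper's reported minimum of 0.97% corresponds to an `mmre` value of about 0.0097.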

Citations: 0
Improving anomaly detection in software logs through hybrid language modeling and reduced reliance on parser
IF 3.1, CAS Tier 2 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-09-29. DOI: 10.1007/s10515-025-00548-y
Yicheng Sun, Jacky Keung, Zhen Yang, Shuo Liu, Hi Kuen Yu

Anomaly detection in software logs is crucial for development and maintenance, allowing timely identification of system failures and ensuring normal operations. Although recent deep learning advancements in log anomaly detection have shown exceptional performance, the reliance on time-consuming log parsers raises concerns about their necessity for quickly identifying anomalies. Standardized preprocessing methods can mishandle or lose important information. Additionally, the significant imbalance between normal and anomalous log data, along with the scarcity of labeled data, presents a persistent challenge in anomaly detection. We first evaluated the impact of omitting a log parser on anomaly detection models. Subsequently, we propose LogRoBERTa, an innovative anomaly detection model that eliminates the need for a parser. LogRoBERTa creates a stable and diverse labeled training set using the Determinantal Point Process (DPP) method, needing only a small amount of labeled data. The hybrid language model is based on RoBERTa’s architecture, combined with an attention-based BiLSTM. This setup leverages RoBERTa’s strong contextual understanding and BiLSTM’s capability to capture sequential dependencies, enhancing performance in complex log sequences. Experiments on four widely used datasets demonstrate that LogRoBERTa outperforms state-of-the-art benchmark models—including three fully supervised approaches—without relying on a dedicated log parser. Furthermore, its consistently strong performance on low-resource datasets highlights its robustness and generalizability across varying data conditions. These results validate the overall effectiveness of LogRoBERTa’s design and offer a thorough evaluation of the implications of bypassing a log parser. Additionally, our ablation studies and training set construction experiments further confirm the contributions of each individual component to the model’s performance. 
The study empirically validated that a RoBERTa-based approach effectively handles software log anomaly detection in long and complex log sequences, providing a more efficient and robust solution for omitting a parser compared to existing models.
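DPP-based selection favors training sets whose examples are spread out in embedding space. As a cheap, hedged stand-in for DPP sampling (not the paper's actual method), a greedy max-min rule captures the same diversity intuition; the log-line embeddings below are invented.

```python
def euclid(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def greedy_diverse(vectors, k, dist):
    """Greedily pick k items, each maximizing its minimum distance to those already chosen."""
    chosen = [0]                                   # seed with the first item
    while len(chosen) < k:
        best = max((i for i in range(len(vectors)) if i not in chosen),
                   key=lambda i: min(dist(vectors[i], vectors[j]) for j in chosen))
        chosen.append(best)
    return chosen

# Toy 2-D embeddings of log lines; the picks spread across the space.
embs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.0, 0.1), (5.0, 0.0)]
picked = greedy_diverse(embs, k=3, dist=euclid)
```

Note how the near-duplicates of item 0 (items 1 and 3) are skipped, which is the property that keeps a small labeled training set both stable and diverse.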

Citations: 0
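The DPP-based construction of a small but diverse labeled training set described in the abstract above can be approximated greedily. The sketch below is a minimal illustration under assumed toy embeddings; the helper names (`cosine`, `greedy_diverse_subset`) and the greedy MAP-style heuristic are our stand-ins, not LogRoBERTa's actual implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def greedy_diverse_subset(embeddings, k):
    """Greedily pick k items, each time adding the candidate least similar
    to anything already selected (a cheap MAP-style proxy for DPP sampling)."""
    selected = [0]  # seed with the first item
    while len(selected) < k:
        best, best_score = None, float("inf")
        for i in range(len(embeddings)):
            if i in selected:
                continue
            # max similarity to the current subset; lower means more diverse
            score = max(cosine(embeddings[i], embeddings[j]) for j in selected)
            if score < best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Toy 2-d "log embeddings": items 0 and 1 are near-duplicates, item 2 differs.
logs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]
print(greedy_diverse_subset(logs, 2))  # → [0, 2]
```

The near-duplicate item 1 is skipped in favor of the orthogonal item 2, which is the diversity behavior a DPP kernel rewards.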
BRMDS: an LLM-based multi-dimensional summary generation approach for bug reports
IF 3.1 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-09-23 DOI: 10.1007/s10515-025-00553-1
Yayun Zhang, Yuying Li, Minying Fang, Xing Yuan, Junwei Du

Bug report summarization aims to generate concise and accurate descriptions that help developers understand and maintain software. Existing methodologies prioritize simplifying report content but fail to provide a structured and well-rounded description of bugs, limiting developers' comprehension efficiency. In this paper, we leverage large language models (LLMs) to generate detailed, multi-dimensional summaries. Our intuition is based on the following facts: (1) LLMs establish robust semantic connections through extensive pre-training on paired data; (2) real-world bug reports contain multi-dimensional information. We propose the Bug Report Multi-Dimensional Summary (BRMDS) approach, defining five dimensions: environment, actual behavior, expected behavior, bug category, and solution suggestions, and use dimension-specific instructions to guide the LLM during Parameter-Efficient Fine-Tuning (PEFT). We construct a multi-dimensional dataset for PEFT and experimental evaluation, thereby addressing the gaps in existing datasets in this domain. The experimental results show that multi-dimensional summaries enhance developers' understanding of bug reports. The BRMDS approach outperforms baseline approaches in both automatic and human evaluations. Our datasets are publicly available at https://github.com/yunjua/bug-reports-multi-dimensional.

Citations: 0
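As a concrete illustration of how the five BRMDS dimensions could drive dimension-specific instructions for fine-tuning, the sketch below builds one prompt per dimension. The instruction wording and the prompt template are hypothetical assumptions; the paper's actual instructions may differ.

```python
# The five summary dimensions named in the BRMDS abstract; the instruction
# texts and the "### ..." prompt template below are illustrative assumptions.
DIMENSIONS = {
    "environment": "Summarize the environment (OS, versions, hardware) in which the bug occurs.",
    "actual behavior": "Summarize what actually happens when the bug is triggered.",
    "expected behavior": "Summarize the behavior the reporter expected instead.",
    "bug category": "Classify the bug into a concise category.",
    "solution suggestions": "Summarize any suggested fixes or workarounds.",
}

def build_prompts(report_text):
    """Return one (dimension, prompt) pair per dimension, ready to be paired
    with reference summaries as fine-tuning examples."""
    template = "### Instruction:\n{inst}\n\n### Bug report:\n{report}\n\n### Summary:"
    return [(dim, template.format(inst=inst, report=report_text))
            for dim, inst in DIMENSIONS.items()]

prompts = build_prompts("App crashes on startup under Ubuntu 22.04 after upgrading to v2.3.")
print(len(prompts))  # → 5
```

Each prompt pairs one dimension's instruction with the full report, so the fine-tuned model learns to produce one focused summary per dimension rather than a single flat digest.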
PIONEER: improving the robustness of student models when compressing pre-trained models of code
IF 3.1 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-09-23 DOI: 10.1007/s10515-025-00560-2
Xiangyue Liu, Xinwei Liu, Lili Bo, Xiaoxue Wu, Yun Yang, Xiaobing Sun, Feng Zhou

Pre-trained models of code have shown significant effectiveness in a variety of software engineering tasks, but their large size makes them difficult to deploy locally. Existing works mainly focus on compressing these large models into small models that achieve similar performance and efficient inference. However, these works overlook that the small models should also be robust enough to deal with adversarial examples that induce incorrect predictions for users. Knowledge distillation techniques typically transform the model compression problem into a combinatorial optimization problem over the student architecture space to achieve the best student model performance, but they can only improve the robustness of the student model to a limited extent through traditional adversarial training. This paper proposes PIONEER (ImProvIng the RObustness of StudeNt ModEls WhEn CompRessing Code Models), a novel knowledge distillation technique that enhances the robustness of the student model without requiring adversarial training. PIONEER incorporates robustness evaluation during distillation to guide the optimization of the student model architecture. By using the probability distributions of original examples and adversarial examples as soft labels, the student model learns the features of both the original samples and adversarial examples during training. We conduct experimental evaluations on two downstream tasks (vulnerability prediction and clone detection) for three models (CodeBERT, GraphCodeBERT, and CodeT5). We utilize PIONEER to compress six downstream task models into small (3 MB) models that are 206× smaller than the original size. The results show that the compressed models reduce the inference latency (76×) and improve the robustness of the model (87.54%) with negligible loss of effectiveness (1.67%).

Citations: 0
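The soft-label scheme described in the abstract above, where the student matches the teacher's probability distributions on both an original example and its adversarial counterpart, can be written as a combined KL-divergence objective. A minimal sketch under assumed toy distributions; the equal `alpha` weighting is our assumption, not PIONEER's reported configuration.

```python
import math

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distill_loss(student_orig, student_adv, teacher_orig, teacher_adv, alpha=0.5):
    """The student matches the teacher's soft labels on both the original
    input and its adversarial variant; alpha balances the two terms."""
    return alpha * kl_div(teacher_orig, student_orig) + \
           (1 - alpha) * kl_div(teacher_adv, student_adv)

# Toy two-class output distributions for one (original, adversarial) pair.
loss = distill_loss([0.7, 0.3], [0.6, 0.4], [0.8, 0.2], [0.5, 0.5])
print(round(loss, 4))  # → 0.0231
```

Because the adversarial term contributes even when the hard label is unchanged, minimizing this loss pushes the student toward the teacher's behavior on perturbed inputs without a separate adversarial-training loop.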
Investigating the bugs in reinforcement learning programs: Insights from Stack Overflow and GitHub
IF 3.1 Tier 2 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-09-23 DOI: 10.1007/s10515-025-00555-z
Jiayin Song, Yike Li, Yunzhe Tian, Haoxuan Ma, Honglei Li, Jie Zuo, Jiqiang Liu, Wenjia Niu

Reinforcement learning (RL) is increasingly applied in areas such as gaming, robotic control, and autonomous driving. Like deep learning systems, RL systems also encounter failures during operation. However, RL differs from deep learning in its error causes and symptom manifestations. What are the differences in error causes and symptoms between RL and deep learning? How are RL errors and their symptoms related? Understanding the symptoms and causes of RL failures can advance research on RL failure detection and repair. In this paper, we conducted a comprehensive empirical study by collecting 1,155 error reports from the popular Q&A forum Stack Overflow and four GitHub repositories: baselines, stable-baselines3, tianshou, and keras-rl. We analyzed the root causes and symptoms of these failures and examined the differences in resolution times across various root causes. Additionally, we analyzed the correlations between causes and symptoms. Our study yielded 14 key findings and six implications for developing RL failure detection and repair tools. Our work is the first to integrate LLM-based analysis with manual validation for RL bug studies, providing actionable insights for tool development and testing strategies.

Citations: 0