
Science of Computer Programming: Latest Publications

Securing LLM code generation: Leveraging prompt engineering to mitigate vulnerabilities across models and languages
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-06-01. Epub Date: 2026-01-19. DOI: 10.1016/j.scico.2026.103446
Shaykhah S. Aldosari , Layla S. Aldawsari
Large Language Models (LLMs) represent a significant advance in artificial intelligence (AI), enabling natural and intuitive human-machine interaction. One rapidly evolving application is LLM code generation, which can expedite software development by automating code writing, debugging, and optimization. However, despite these enhanced capabilities, essential questions remain about the security implications of code generated by these models. This study addresses three key research questions to examine the security risks in LLM-generated code. It examines whether code generated by different open-source LLMs exhibits measurable variation in vulnerability prevalence, investigates how the choice of programming language influences the security of LLM-generated code, and explores the degree to which prompt specificity and construction shape the security of the generated code. Our findings demonstrate differences across all dimensions: LLMs exhibited a variance of up to 136.06, programming languages showed a maximum performance gap of 56%, and prompt engineering achieved up to a 77% improvement in security.
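The prompt-engineering dimension the abstract highlights can be illustrated with a minimal sketch: augmenting a code-generation request with explicit security requirements before sending it to a model. The preamble text and all names below are hypothetical, not the paper's actual prompts.

```python
# Illustrative sketch of a security-augmented prompt for LLM code generation.
# SECURITY_PREAMBLE and build_secure_prompt are hypothetical names, not from the paper.

SECURITY_PREAMBLE = (
    "Write secure code. Validate all inputs, use parameterized queries, "
    "avoid unsafe functions, and handle errors explicitly."
)

def build_secure_prompt(task: str, language: str) -> str:
    """Combine a task description with explicit security instructions."""
    return f"{SECURITY_PREAMBLE}\nLanguage: {language}\nTask: {task}"

prompt = build_secure_prompt("Read a username and look it up in a SQL database.", "Python")
print(prompt.startswith(SECURITY_PREAMBLE))  # True: security guidance precedes the task
```

Comparing model output for such a specific, security-focused prompt against a bare task description is one way to measure the improvement the study reports.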
Citations: 0
MulAD: A log-based anomaly detection approach for distributed systems using multi-pattern and multi-model fusion
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-06-01. Epub Date: 2025-12-23. DOI: 10.1016/j.scico.2025.103433
Xinjie Wei , Chang-Ai Sun , Xiaoyi Zhang , Dave Towey
Context: Log-based anomaly detection (LAD) techniques examine whether continuously generated logs match historically normal patterns, which helps ensure reliability in distributed systems using DevOps. However, complex anomalies can span multiple log-pattern types and thus may only be detected by combining these patterns: relying on any single pattern may cause anomalies to be missed, producing false negatives in anomaly detection.
Objective: In this paper, we propose an Anomaly-Detection approach based on Multi-type log-pattern fusion and Multi-model integration (MulAD). MulAD fuses multi-type log patterns into a synthetic representation to detect complex anomalies.
Method: MulAD first rearranges logs by source parameters to decouple interleaved logs and isolate relevant events. It then derives log patterns across five dimensions (semantic, sequential, quantitative, temporal, and parametric) and fuses them into a unified synthesized pattern. Finally, to detect anomalies, MulAD integrates the MABi-LSTM, Transformer, and graph neural network (GNN) models, which are designed to capture temporal and sequential dependencies, contextual information, and structural dependencies, respectively.
Result: We evaluated MulAD on three public datasets (HDFS, BGL, and ThunderBird) and one industrial dataset from the Ray system. Experimental results show that MulAD outperforms all state-of-the-art techniques.
Conclusion: We conclude that MulAD is a promising anomaly-detection technique for complex anomalies in distributed systems.
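The first step of the method, rearranging interleaved logs by a source parameter, can be sketched as a simple grouping operation. The request-id field below is a hypothetical example of such a source parameter.

```python
# Sketch of MulAD's first step as described in the abstract: grouping interleaved
# log events by their source parameter (here a hypothetical request id) so that
# each source's events form a contiguous sequence.
from collections import defaultdict

def rearrange_by_source(logs):
    """Group (source_id, event) pairs into per-source event sequences."""
    grouped = defaultdict(list)
    for source_id, event in logs:
        grouped[source_id].append(event)
    return dict(grouped)

interleaved = [("r1", "open"), ("r2", "open"), ("r1", "write"),
               ("r2", "close"), ("r1", "close")]
print(rearrange_by_source(interleaved))
# {'r1': ['open', 'write', 'close'], 'r2': ['open', 'close']}
```

Per-source sequences like these are then the input from which the five pattern dimensions are derived.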
Citations: 0
Cost-adaptive multi-level semantic feature learning for source code based bug severity prediction
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-06-01. Epub Date: 2026-01-17. DOI: 10.1016/j.scico.2026.103444
Xiaoke Zhu , Yufeng Shi , Xiaopan Chen , Caihong Yuan , Fumin Qi , Xiao-Yuan Jing
Bug severity prediction plays a crucial role in software development by enabling timely defect management. Traditional approaches that rely on bug reports are prone to subjective bias, often leading to inaccurate severity assessments. In contrast, source-code-based methods can directly learn code representations to more accurately identify potential defects. However, existing source-code-based models do not make full use of hierarchical deep semantic information and pay insufficient attention to the intrinsic class-imbalance issue. To overcome these challenges, this paper presents the Cost-Adaptive Multi-level sEmantic feature Learning (CAMEL) framework for bug severity prediction. The framework comprises three core modules: the feature extraction module, the Multi-level Semantic Information Fusion (MSIF) module, and the Cost Weight Optimization (CWO) module. Specifically, the feature extraction module leverages CodeBERT to capture multi-level semantic information from source code. The MSIF module then dynamically aggregates layer-specific features from each CodeBERT layer using an LSTM combined with a hierarchical attention mechanism, thereby preserving global semantic integrity. Finally, the CWO module mitigates the influence of class imbalance by dynamically adjusting class-weight parameters. Experiments on a dataset of 3342 method-level code snippets with varying bug severity levels demonstrate that CAMEL significantly outperforms state-of-the-art methods across key metrics, including F1-Weighted, Precision, Recall, and MCC.
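The idea behind cost-weight adjustment for class imbalance can be sketched with a common inverse-frequency scheme: rare severity classes receive larger weights so they contribute more to the loss. This is an illustrative choice, not necessarily the exact formula of the CWO module.

```python
# Hedged sketch of class-weight derivation for an imbalanced label distribution.
# Inverse-frequency weighting is a standard technique; the paper's CWO module
# adjusts weights dynamically during training, which this static sketch does not show.

def inverse_frequency_weights(labels):
    """Per-class weights proportional to 1 / class frequency, normalised to sum to 1."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    inv = {c: 1.0 / n for c, n in counts.items()}
    total = sum(inv.values())
    return {c: w / total for c, w in inv.items()}

# A hypothetical 80/20 split between "minor" and "critical" severity labels:
weights = inverse_frequency_weights(["minor"] * 8 + ["critical"] * 2)
print(weights["critical"] > weights["minor"])  # True: the rarer class gets the larger weight
```

Such weights would then scale the per-class terms of a cross-entropy loss during training.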
Citations: 0
CFFitST: Classification few-shot fit sentence transformer
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-05-01. Epub Date: 2025-12-05. DOI: 10.1016/j.scico.2025.103420
Daniel Fernando Gómez-Barrera, Luccas Rojas Becerra, Juan Pinzón Roncancio, David Ortiz Almanza, Juan Arboleda, Mario Linares-Vásquez, Rubén Francisco Manrique
This paper presents CFFitST, a novel strategy for iteratively fine-tuning sentence embeddings using a pre-trained sentence transformer to enhance classification performance in few-shot settings. The method dynamically adjusts the number and composition of training samples based on internal assessments over the training data. CFFitST was evaluated in the “NLBSE 2024” tool competition, which focused on multi-class classification of GitHub issues. The competition required robust few-shot learning models to classify 300 issues across five different repositories. Our approach achieved an F1 score of 84.2%, a statistically significant improvement of 2.44% over the SetFit baseline.
Citations: 0
Inferring non-failure conditions for declarative programs
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-05-01. Epub Date: 2025-11-21. DOI: 10.1016/j.scico.2025.103416
Michael Hanus
Unintended failures during a computation are painful but frequent in software development. Failures due to external causes (e.g., missing files or insufficient permissions) can be caught by exception handlers. Programming failures, such as calling a partially defined operation with unintended arguments, are often not caught because the software is assumed to be correct. This paper presents an approach to verify such assumptions. For this purpose, non-failure conditions for operations are inferred and then checked at all uses of partially defined operations. In the positive case, the absence of such failures is ensured. In the negative case, the programmer can adapt the program to handle possibly failing situations and check the program again. Our method is fully automatic and can be applied to larger declarative programs. The results of an implementation for functional logic Curry programs are presented.
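The core idea, inferring a non-failure condition for a partially defined operation and checking it at call sites, can be mirrored in a small sketch. The paper targets functional logic Curry programs; the Python analogue below, with hypothetical names, only illustrates the concept.

```python
# Illustrative analogue of non-failure conditions: head() is partially defined
# (it fails on the empty list), its inferred non-failure condition is
# "the argument is non-empty", and an adapted call site checks that condition.

def head(xs):
    """Partially defined: fails on the empty list."""
    return xs[0]

def head_nonfail_condition(xs):
    """Inferred non-failure condition for head."""
    return len(xs) > 0

def safe_call_head(xs):
    """A call site adapted to handle the possibly-failing situation."""
    if head_nonfail_condition(xs):
        return head(xs)
    return None  # caller-chosen fallback for the failing case

print(safe_call_head([1, 2, 3]), safe_call_head([]))  # 1 None
```

In the verified setting, such conditions are discharged statically rather than checked at run time as this sketch does.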
Citations: 0
MetaOCaml: ten years later – System description
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-05-01. Epub Date: 2025-11-15. DOI: 10.1016/j.scico.2025.103397
Oleg Kiselyov
MetaOCaml is a superset of OCaml for convenient code generation with static guarantees: the generated code is well-formed, well-typed, and well-scoped, by construction. Not only does the produced code always compile; code fragments in which a variable escapes its scope are detected already during code generation. MetaOCaml has been employed for compiling domain-specific languages, generic programming, automating tedious specializations in high-performance computing, generating efficient computational kernels, and embedded programming. It is used in education and has served as inspiration for several other metaprogramming systems.
Best known in MetaOCaml are the types for values representing generated code and the template-based mechanism for producing such values, a.k.a. brackets and escapes. MetaOCaml also features cross-stage persistence, the generation of ordinary and mutually recursive definitions, first-class pattern matching, and heterogeneous metaprogramming.
The extant implementation of MetaOCaml, first presented at FLOPS 2014, has been continuously evolving. We describe the current design and implementation, stressing particularly notable additions. Among them is a new and efficient translation from typed code templates to code combinators. Scope-extrusion detection unexpectedly brought let-insertion, and a conclusive solution to the 20-year-old vexing problem of cross-stage persistence.
Citations: 0
Design and Evaluation of Coconut: Typestates for C++
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-05-01. Epub Date: 2025-10-25. DOI: 10.1016/j.scico.2025.103398
Arwa Hameed Alsubhi, Ornela Dardha, Simon J. Gay
This paper introduces Coconut, a C++ tool that uses templates to define object behaviours and validates them with typestate checking. Coconut employs the GIMPLE intermediate representation (IR) from the GCC compiler’s middle-end phase for static checks, ensuring objects follow valid state transitions as defined in typestate templates. It supports features such as branching, recursion, aliasing, inheritance, and typestate visualisation. We illustrate Coconut’s application in embedded systems, validating their behaviour pre-deployment. We present an experimental study showing that Coconut improves performance and reduces code complexity with respect to the original code, highlighting the benefits of typestate-based verification.
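The typestate idea itself, that an object's methods are only valid in certain states, can be captured as a transition table. Coconut enforces this statically on GIMPLE IR at compile time; the small dynamic sketch below, with hypothetical names, only mirrors the concept.

```python
# Rough illustration of a typestate: a file-like object whose valid method calls
# are given by a (state, method) -> next-state transition table. Any call not in
# the table is a typestate violation.

TRANSITIONS = {("Closed", "open"): "Open",
               ("Open", "write"): "Open",
               ("Open", "close"): "Closed"}

class TypestateFile:
    def __init__(self):
        self.state = "Closed"

    def call(self, method):
        nxt = TRANSITIONS.get((self.state, method))
        if nxt is None:
            raise RuntimeError(f"{method}() invalid in state {self.state}")
        self.state = nxt

f = TypestateFile()
for m in ["open", "write", "close"]:
    f.call(m)   # a valid sequence; "write" before "open" would raise
print(f.state)  # Closed
```

A static checker like Coconut rejects the invalid sequences at compile time instead of raising at run time.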
Citations: 0
Code clone classification based on multi-dimension feature entropy
IF 1.4, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2026-05-01. Epub Date: 2025-11-22. DOI: 10.1016/j.scico.2025.103419
Bin Hu , Lizhi Zheng , Dongjin Yu , Yijian Wu , Jie Chen , Tianyi Hu
Code clones have been a hot topic in software engineering for decades. Thanks to the rapid development of clone detection techniques, it is not difficult to find code clones in software systems, but managing the vast number of clones remains an open problem. Typically, refactoring approaches should be adopted to eliminate clones, thereby mitigating the threat to software maintenance. In some situations, a clone group may contain several different code variants residing in different locations, which makes refactoring complicated, since their differences must be analyzed and reconciled beforehand. Therefore, we need an approach to recognize clone groups that are easy to refactor or eliminate. In this paper, we first collected large-scale datasets from three different domains and studied the distribution of four different metrics of code clones. We found that the distribution of each metric follows a certain pattern: inner-file clones account for approximately 50% of clones, and Type-3 clones for over 45%. However, the complexity of code clone groups cannot be judged from these metrics alone. Based on our findings, we propose a classification approach that helps developers distinguish clone groups that are easy to eliminate by refactoring from those that are hard to refactor. We propose four clone feature entropy measures based on information entropy theory: variant entropy, distribution entropy, relation entropy, and syntactic entropy. We then calculate a fused clone entropy, the weighted sum of these four feature entropies. Finally, we use the four types of feature entropy and the fused entropy to classify or rank code clone groups. Experiments on three different application domains show that the proposed clone feature entropy can help developers identify clone groups that are easy to eliminate by refactoring. Manual validation also reveals that the complexity of clone groups does not depend solely on the number of clone instances. This approach provides a new way to manage code clones and offers useful ideas for future clone maintenance research.
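The entropy machinery described above can be sketched directly: Shannon entropy over a feature's value distribution within a clone group, plus a weighted fusion of several feature entropies. The example feature values and weights below are placeholders, not the paper's tuned choices.

```python
# Sketch of per-feature Shannon entropy and weighted fusion, the construction
# the abstract describes (variant, distribution, relation, syntactic entropies
# combined into a fused clone entropy).
import math

def shannon_entropy(values):
    """Entropy (bits) of the empirical distribution of a feature's values."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def fused_entropy(feature_values, weights):
    """Weighted sum of per-feature entropies, one entry per feature dimension."""
    return sum(w * shannon_entropy(vals) for w, vals in zip(weights, feature_values))

# Four identical variants give entropy 0; four distinct variants give the maximum:
print(shannon_entropy(["a", "b", "c", "d"]))  # 2.0
```

A clone group whose fused entropy is low is uniform across the measured dimensions and hence a plausible candidate for easy refactoring.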
Citations: 0
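The fused clone entropy described in the abstract above is a weighted sum of per-feature Shannon entropies computed over a clone group. The paper does not reproduce its exact formulas here, so the following is a minimal sketch under that reading; the feature choices, the equal weights, and the example clone group are hypothetical:

```python
import math
from collections import Counter

def feature_entropy(values):
    """Shannon entropy (bits) of one categorical feature across a clone group.

    `values` holds one observation per clone instance, e.g. the file each
    instance lives in (a distribution-style feature) or its clone type
    (a syntax-style feature). A group whose instances all share the same
    value has entropy 0, i.e. it is uniform and presumably easy to refactor.
    """
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def fused_entropy(entropies, weights):
    """Weighted sum of per-feature entropies, in the spirit of the paper's
    fused clone entropy; the weights here are illustrative, not the paper's."""
    return sum(w * h for w, h in zip(weights, entropies))

# Hypothetical clone group: four instances, their host files and clone types.
files = ["a.c", "a.c", "b.c", "c.c"]   # distribution-style feature
types = ["T1", "T1", "T3", "T3"]       # syntax-style feature

h_dist = feature_entropy(files)   # 1.5 bits
h_syn = feature_entropy(types)    # 1.0 bit
score = fused_entropy([h_dist, h_syn], [0.5, 0.5])   # 1.25
```

Ranking clone groups by `score` ascending would surface the low-entropy (uniform, refactoring-friendly) groups first, which matches the classification goal stated in the abstract.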
Combining sequential feature test cases to generate sound tests for concurrent features
IF 1.4 CAS Tier 4, Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-05-01 Epub Date: 2025-11-14 DOI: 10.1016/j.scico.2025.103414
Rafaela Almeida , Sidney Nogueira , Augusto Sampaio
Testing concurrent systems is challenging due to their complex interactions and behaviours, along with the difficulty in reproducing failures. We propose a sound strategy for testing concurrent mobile applications by extracting use cases that capture interleavings of behaviours of existing test cases for individual features. These use cases are then used to create a formal model that is the input for a refinement checking approach to generate test cases that are still sequential but exercise the execution of concurrent features. We introduce a conformance relation, cspioq, which considers quiescent behaviour (absence of output). This relation is based on cspio (which is itself inspired by ioco); cspio does not take quiescence behaviour into account. While ioco as well as cspioco (a denotational semantics for ioco based on CSP) rely on suspension traces, our approach adopts the traces model annotated with a special event to represent quiescence. This allowed us to reuse our previous theory and test case generation strategy for sequential systems in a conservative way. We also analyse the complexity of automatically generating test cases. For implementation efficiency, we optimise the strategy by directly interleaving steps of existing test cases and show that this preserves soundness. Moreover, we provide tool support for every phase of the approach. Finally, we present the results of an empirical evaluation designed to measure the effectiveness of the overall strategy in terms of test coverage and bug detection. The results indicate that our approach yields higher coverage and higher bug detection rates compared to the set of tests originally developed by our industrial partner (Motorola) engineers.
{"title":"Combining sequential feature test cases to generate sound tests for concurrent features","authors":"Rafaela Almeida ,&nbsp;Sidney Nogueira ,&nbsp;Augusto Sampaio","doi":"10.1016/j.scico.2025.103414","DOIUrl":"10.1016/j.scico.2025.103414","url":null,"abstract":"<div><div>Testing concurrent systems is challenging due to their complex interactions and behaviours, along with the difficulty in reproducing failures. We propose a sound strategy for testing concurrent mobile applications by extracting use cases that capture interleavings of behaviours of existing test cases for individual features. These use cases are then used to create a formal model that is the input for a refinement checking approach to generate test cases that are still sequential but exercise the execution of concurrent features. We introduce a conformance relation, <strong>cspio</strong><sub><strong>q</strong></sub>, which considers quiescent behaviour (absence of output). This relation is based on <strong>cspio</strong> (which is itself inspired by <strong>ioco</strong>); <strong>cspio</strong> does not take quiescence behaviour into account. While <strong>ioco</strong> as well as <strong>cspioco</strong> (a denotational semantics for <strong>ioco</strong> based on CSP) rely on suspension traces, our approach adopts the traces model annotated with a special event to represent quiescence. This allowed us to reuse our previous theory and test case generation strategy for sequential systems in a conservative way. We also analyse the complexity of automatically generating test cases. For implementation efficiency, we optimise the strategy by directly interleaving steps of existing test cases and show that this preserves soundness. Moreover, we provide tool support for every phase of the approach. Finally, we present the results of an empirical evaluation designed to measure the effectiveness of the overall strategy in terms of test coverage and bug detection. 
The results indicate that our approach yields higher coverage and higher bug detection rates compared to the set of tests originally developed by our industrial partner (Motorola) engineers.</div></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"250 ","pages":"Article 103414"},"PeriodicalIF":1.4,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
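The optimised strategy in the abstract above interleaves steps of existing sequential test cases directly. As an illustration only (the paper's actual generation goes through a CSP model and refinement checking), here is a sketch that enumerates all order-preserving interleavings of two hypothetical step lists; the step names are invented:

```python
def interleavings(a, b):
    """Yield every order-preserving interleaving of step lists `a` and `b`.

    Each result keeps the relative order of steps within each original test
    case, so neither feature's own scenario is reordered — the property that
    keeps the combined sequential test a sound exercise of the two features'
    concurrent execution.
    """
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

# Hypothetical step lists taken from two sequential feature tests.
wifi = ["open_wifi_settings", "toggle_wifi"]
call = ["dial_number"]
combined = list(interleavings(wifi, call))
# A 2-step and a 1-step test yield C(3, 1) = 3 interleavings.
```

The number of interleavings grows combinatorially (C(m+n, n) for step lists of length m and n), which is one reason a tool would prune or sample rather than enumerate exhaustively.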
Multimodal information fusion for software vulnerability detection based on both source and binary codes
IF 1.4 CAS Tier 4, Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-05-01 Epub Date: 2025-11-13 DOI: 10.1016/j.scico.2025.103411
Yuzhou Liu , Qi Wang , Shuang Jiang , Runze Wu , Hongxu Tian , Peng Zhang
Context: Many researchers have proposed vulnerability detection methods that enhance software reliability by analyzing programs. However, some vulnerabilities are difficult to identify from source code alone, especially those related to execution.
Objectives: To address this problem, this paper introduces binary code as an additional input and proposes a novel approach to software vulnerability detection based on multimodal information fusion.
Methods: The approach treats the source and binary codes as different modalities and uses two pre-trained models as feature extractors to analyze them separately. We then design an attention-based information fusion strategy that takes the information from the source code as the main body and the information from the binary code as the supplement. This strategy not only captures correlations among features across modalities but also filters redundancy out of the binary code during fusion.
Results: Our method was comprehensively evaluated on three widely-used datasets in different languages, namely Reveal (C), Devign (C++), and Code_vulnerability_java (Java): (1) For vulnerability detection performance, the Accuracy reached 86.09 %, 84.58 %, and 80.43 % across the three datasets, with F1-scores of 82.87 %, 84.62 %, and 79.58 % respectively; (2) Compared with seven state-of-the-art baseline methods, our approach achieved Accuracy improvements of 2.38 %-3.01 % and F1-score enhancements of 2.32 %-8.47 % across the datasets; (3) Moreover, the ablation experiment shows that, when combining binary codes with source codes (versus using source codes alone), the Accuracy improved by 6.83 %-13.76 % and the F1-score increased by 5.36 %-9.86 %, demonstrating the significant performance gains from multimodal data integration.
Conclusion: The results show that our approach achieves good performance on the software vulnerability detection task. Meanwhile, ablation experiments confirm the contribution of binary code to detection and indicate the effectiveness of our fusion strategy. We have released the codes and datasets (https://github.com/Wangqxn/Vul-detection) to facilitate follow-up research.
{"title":"Multimodal information fusion for software vulnerability detection based on both source and binary codes","authors":"Yuzhou Liu ,&nbsp;Qi Wang ,&nbsp;Shuang Jiang ,&nbsp;Runze Wu ,&nbsp;Hongxu Tian ,&nbsp;Peng Zhang","doi":"10.1016/j.scico.2025.103411","DOIUrl":"10.1016/j.scico.2025.103411","url":null,"abstract":"<div><div>Context: Many researchers have proposed vulnerability detection methods to enhance software reliability by analyzing the program. However, some vulnerabilities are difficult to be identified only from the source codes, especially the ones related to the execution.</div><div>Objectives: To solve this problem, this paper introduces extra binary codes and proposes a novel solution for software vulnerability detection based on the multimodal information fusion.</div><div>Methods: The approach treats the source and binary codes as different modalities, and uses two pre-trained models as feature extractors to analyze them separately. Then, we design an attention-based information fusion strategy that taking the information from source codes as the main body while the one from binary codes as the supplement. It could not only capture the correlations among features across different modalities, but also filter the redundancy from the binary codes in the fusion process. 
In this way, a more comprehensive representation of software is gained and finally taken as the basis for the vulnerability detection.</div><div>Results: Our method was comprehensively evaluated on three widely-used datasets in different languages, that is Reveal in C, Devign in C++, and Code_vulnerability_java in Java: (1) For vulnerability detection performance, the Accuracy reached 86.09 %, 84.58 %, and 80.43 % across the three datasets, with F1-scores of 82.87 %, 84.62 %, and 79.58 % respectively; (2) Compared with seven state-of-the-art baseline methods, our approach achieved Accuracy improvements of 2.38 %-3.01 % and F1-score enhancements of 2.32 %-8.47 % across the datasets; (3) Moreover, the ablation experiment shows when combining binary codes with source codes (versus using source codes alone), the Accuracy improved by 6.83 %-13.76 % and F1-score increased by 5.36 %-9.86 %, demonstrating the significant performance gains from multimodal data integration.</div><div>Conclusion: The results show that our approach can achieve good performance for the task of software vulnerability detection. Meanwhile, ablation experiments confirm the contributions of binary codes to the detection and indicate the effectiveness of our fusion strategy. 
We have released the codes and datasets <span><span>(https://github.com/Wangqxn/Vul-detection)</span><svg><path></path></svg></span> to facilitate follow-up research.</div></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"250 ","pages":"Article 103411"},"PeriodicalIF":1.4,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145624898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
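The attention-based fusion in the abstract above takes the source-code embedding as the main body and attends over binary-code features as the supplement. The paper's exact architecture is not reproduced here, so this is a rough sketch using scaled dot-product attention in plain Python; the toy embeddings, their dimensions, and the final concatenation step are all assumptions:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def attend(query, keys, values):
    """Scaled dot-product attention: one query vector over binary-code features.

    Returns a weighted mix of `values`; binary-code features that are
    irrelevant to the source-code query receive small weights, which is the
    redundancy-filtering effect the fusion strategy relies on.
    """
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def fuse(src_vec, bin_vecs):
    """Source embedding as the main body, attended binary features appended
    as the supplement (concatenation is one plausible combination choice)."""
    supplement = attend(src_vec, bin_vecs, bin_vecs)
    return src_vec + supplement

# Hypothetical 2-d embeddings from the two pre-trained feature extractors.
src = [1.0, 0.0]
bins = [[1.0, 0.0], [0.0, 1.0]]
fused = fuse(src, bins)   # 4-d fused representation
```

In the toy example the binary vector aligned with the source query gets the larger attention weight, so the supplement half of `fused` is dominated by the relevant binary feature, as the abstract's filtering claim suggests.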