
Science of Computer Programming: Latest Publications

Code clone classification based on multi-dimension feature entropy
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-22 | DOI: 10.1016/j.scico.2025.103419
Bin Hu , Lizhi Zheng , Dongjin Yu , Yijian Wu , Jie Chen , Tianyi Hu
Code clones have been a hot topic in software engineering for decades. Due to the rapid development of clone detection techniques, it is no longer difficult to find code clones in software systems, yet managing the vast number of clones remains an open problem. Typically, refactoring approaches should be adopted to eliminate clones, thereby mitigating the threat to software maintenance. In some situations, a clone group may contain several different code variants residing in different locations, which makes refactoring too complicated, as their differences must be analyzed and reconciled before refactoring. Therefore, we need an approach to recognize clone groups that are easy to refactor or eliminate. In this paper, we first collected large-scale datasets from three different domains and studied the distribution of four different metrics of code clones. We found that the distribution of each metric follows a certain pattern: inner-file clones account for approximately 50% of clones, and Type-3 clones account for more than 45%. However, the complexity of code clone groups cannot be judged based solely on these metrics. Based on our findings, we propose a classification approach that assists developers in separating clone groups that are easy to eliminate by refactoring from those that are hard to refactor. We propose four clone feature entropy measures based on information entropy theory: variant entropy, distribution entropy, relation entropy, and syntactic entropy. We then calculate a fused clone entropy, defined as the weighted sum of the four feature entropies. Finally, we use the four types of feature entropy and the fused feature entropy to classify or rank code clone groups. Experiments on three different application domains show that the proposed clone feature entropy can help developers identify clone groups that are easy to eliminate by refactoring. Manual validation also reveals that the complexity of clone groups does not depend solely on the number of clone instances. This approach provides a new way to manage code clones and offers useful ideas for future clone maintenance research.
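As the abstract sketches it, the fused measure is a weighted sum of entropies computed over different clone-group features. Purely as an illustration of that computation (our own minimal sketch, not the authors' implementation), the Python snippet below applies Shannon entropy to categorical features of a clone group and fuses them with weights; the feature names, values, and weights are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of a feature."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def fused_clone_entropy(clone_group, weights):
    """Weighted sum of per-feature entropies for one clone group.

    clone_group: one dict per clone instance, with hypothetical feature keys.
    weights: mapping from the same keys to non-negative weights.
    """
    return sum(
        w * shannon_entropy([instance[feature] for instance in clone_group])
        for feature, w in weights.items()
    )

# Toy clone group: three instances spread over two files and two variants.
group = [
    {"variant": "A", "file": "Foo.java", "relation": "same-dir", "syntax": "Type1"},
    {"variant": "A", "file": "Bar.java", "relation": "cross-dir", "syntax": "Type3"},
    {"variant": "B", "file": "Bar.java", "relation": "cross-dir", "syntax": "Type3"},
]
weights = {"variant": 0.3, "file": 0.3, "relation": 0.2, "syntax": 0.2}
print(round(fused_clone_entropy(group, weights), 3))
```

A higher fused value indicates more heterogeneity inside the group (more variants, files, or clone types), which matches the intuition of ranking groups by how hard they are to refactor.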
Citations: 0
Inferring non-failure conditions for declarative programs
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-21 | DOI: 10.1016/j.scico.2025.103416
Michael Hanus
Unintended failures during a computation are painful but frequent during software development. Failures due to external reasons (e.g., missing files, no permissions, etc.) can be caught by exception handlers. Programming failures, such as calling a partially defined operation with unintended arguments, are often not caught due to the assumption that the software is correct. This paper presents an approach to verify such assumptions. For this purpose, non-failure conditions for operations are inferred and then checked in all uses of partially defined operations. In the positive case, the absence of such failures is ensured. In the negative case, the programmer could adapt the program to handle possibly failing situations and check the program again. Our method is fully automatic and can be applied to larger declarative programs. The results of an implementation for functional logic Curry programs are presented.
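The paper works on functional logic (Curry) programs; as a language-neutral illustration of the idea only (not the paper's Curry implementation), the Python sketch below pairs a partially defined operation with an inferred non-failure condition and checks that condition at each call. The operation, condition, and checking helper are hypothetical stand-ins for what the tool infers and verifies statically.

```python
def head(xs):
    """Partially defined operation: fails on the empty list."""
    if not xs:
        raise ValueError("head: empty list")
    return xs[0]

def nonfail_head(xs):
    """Inferred non-failure condition for head: the argument is non-empty."""
    return len(xs) > 0

def checked_call(op, condition, arg):
    """A static verifier would prove `condition` at every call site;
    here the contract is simply checked at run time for illustration."""
    if not condition(arg):
        raise AssertionError(f"non-failure condition of {op.__name__} violated")
    return op(arg)

print(checked_call(head, nonfail_head, [1, 2, 3]))   # condition holds, returns 1
# checked_call(head, nonfail_head, [])               # would report a violation
```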
Citations: 0
Combining sequential feature test cases to generate sound tests for concurrent features
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-14 | DOI: 10.1016/j.scico.2025.103414
Rafaela Almeida , Sidney Nogueira , Augusto Sampaio
Testing concurrent systems is challenging due to their complex interactions and behaviours, along with the difficulty of reproducing failures. We propose a sound strategy for testing concurrent mobile applications by extracting use cases that capture interleavings of behaviours of existing test cases for individual features. These use cases are then used to create a formal model that serves as the input to a refinement-checking approach for generating test cases that are still sequential but exercise the execution of concurrent features. We introduce a conformance relation, cspioq, which considers quiescent behaviour (absence of output). This relation is based on cspio (which is itself inspired by ioco); cspio does not take quiescence into account. While both ioco and cspioco (a denotational semantics for ioco based on CSP) rely on suspension traces, our approach adopts the traces model annotated with a special event representing quiescence. This allows us to reuse our previous theory and test case generation strategy for sequential systems in a conservative way. We also analyse the complexity of automatically generating test cases. For implementation efficiency, we optimise the strategy by directly interleaving steps of existing test cases and show that this preserves soundness. Moreover, we provide tool support for every phase of the approach. Finally, we present the results of an empirical evaluation designed to measure the effectiveness of the overall strategy in terms of test coverage and bug detection. The results indicate that our approach yields higher coverage and higher bug detection rates compared to the set of tests originally developed by engineers of our industrial partner (Motorola).
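To make the optimised step of "directly interleaving steps of existing test cases" concrete, the Python sketch below enumerates all order-preserving interleavings of two sequential feature test traces, with a special event standing in for quiescence as in the annotated traces model described above. The event names and the quiescence marker are hypothetical; the actual approach operates on CSP models and a refinement checker.

```python
from itertools import combinations

QUIESCENCE = "quiescence"  # special event marking the absence of output

def interleavings(trace_a, trace_b):
    """All order-preserving interleavings of two sequential test traces."""
    n, m = len(trace_a), len(trace_b)
    results = []
    for positions in combinations(range(n + m), n):  # slots taken by trace_a
        merged, ia, ib = [], 0, 0
        for i in range(n + m):
            if i in positions:
                merged.append(trace_a[ia]); ia += 1
            else:
                merged.append(trace_b[ib]); ib += 1
        results.append(merged)
    return results

# Hypothetical feature test cases: inputs prefixed with '?', outputs with '!'.
call_test = ["?dial", "!ringing", QUIESCENCE]
sms_test = ["?send_sms", "!sms_sent"]
for trace in interleavings(call_test, sms_test)[:3]:
    print(trace)
```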
Citations: 0
Multimodal information fusion for software vulnerability detection based on both source and binary codes
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-13 | DOI: 10.1016/j.scico.2025.103411
Yuzhou Liu , Qi Wang , Shuang Jiang , Runze Wu , Hongxu Tian , Peng Zhang
Context: Many researchers have proposed vulnerability detection methods that enhance software reliability by analyzing the program. However, some vulnerabilities are difficult to identify from source code alone, especially those related to execution.
Objectives: To address this problem, this paper additionally leverages binary code and proposes a novel solution for software vulnerability detection based on multimodal information fusion.
Methods: The approach treats source code and binary code as different modalities and uses two pre-trained models as feature extractors to analyze them separately. We then design an attention-based information fusion strategy that takes the information from the source code as the main body and the information from the binary code as a supplement. It not only captures correlations among features across modalities but also filters redundancy from the binary code during fusion. In this way, a more comprehensive representation of the software is obtained and used as the basis for vulnerability detection.
Results: Our method was comprehensively evaluated on three widely used datasets in different languages, namely Reveal (C), Devign (C++), and Code_vulnerability_java (Java): (1) For vulnerability detection performance, the accuracy reached 86.09%, 84.58%, and 80.43% across the three datasets, with F1-scores of 82.87%, 84.62%, and 79.58%, respectively; (2) Compared with seven state-of-the-art baseline methods, our approach achieved accuracy improvements of 2.38%-3.01% and F1-score improvements of 2.32%-8.47% across the datasets; (3) Moreover, the ablation experiment shows that when combining binary code with source code (versus using source code alone), the accuracy improved by 6.83%-13.76% and the F1-score increased by 5.36%-9.86%, demonstrating the significant performance gains from multimodal data integration.
Conclusion: The results show that our approach achieves good performance on the software vulnerability detection task. Meanwhile, ablation experiments confirm the contribution of binary code to the detection and indicate the effectiveness of our fusion strategy. We have released the code and datasets (https://github.com/Wangqxn/Vul-detection) to facilitate follow-up research.
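As a rough picture of what "source code as the main body, binary code as the supplement" could look like in code (a sketch under our own assumptions, not the released implementation), the PyTorch snippet below fuses the two modalities with cross-attention: source-code features form the queries and binary-code features the keys and values. All dimensions, the pooling, and the classification head are hypothetical.

```python
import torch
import torch.nn as nn

class SourceBinaryFusion(nn.Module):
    """Cross-attention fusion of source-code and binary-code features."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, 2)  # vulnerable vs. not vulnerable

    def forward(self, src_feats, bin_feats):
        # src_feats: (batch, src_len, dim) from a source-code encoder
        # bin_feats: (batch, bin_len, dim) from a binary-code encoder
        fused, _ = self.attn(query=src_feats, key=bin_feats, value=bin_feats)
        fused = self.norm(src_feats + fused)  # residual keeps source as the main signal
        return self.classifier(fused.mean(dim=1))  # mean-pool over tokens

model = SourceBinaryFusion()
logits = model(torch.randn(4, 128, 768), torch.randn(4, 256, 768))
print(logits.shape)  # torch.Size([4, 2])
```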
Citations: 0
Automatic identification of extrinsic bug reports for just-in-time bug prediction
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-05 | DOI: 10.1016/j.scico.2025.103410
Guisheng Fan , Yuguo Liang , Longfei Zu , Huiqun Yu , Zijie Huang , Wentao Chen
In software development, developers create bug reports within an Issue Tracking System (ITS) to describe the cause, symptoms, severity, and other technical details of bugs. The ITS includes reports of both intrinsic bugs (i.e., those originating within the software itself) and extrinsic bugs (i.e., those arising from third-party dependencies). Although extrinsic bugs are not recorded in the Version Control System (VCS), they can still affect Just-In-Time (JIT) bug prediction models that rely on VCS-derived information.
Previous research has shown that excluding extrinsic bugs can significantly improve the performance of JIT bug prediction models. However, manually classifying bugs as intrinsic or extrinsic is time-consuming and error-prone. To address this issue, we propose a CAN model that integrates the local feature extraction capability of TextCNN with the nonlinear approximation advantage of the Kolmogorov-Arnold Network (KAN). Experiments on 1880 labeled data samples from the OpenStack project demonstrate that the CAN model outperforms benchmark models such as BERT and CodeBERT, achieving an accuracy of 0.7492 and an F1-score of 0.8072. By comparing datasets with and without source code, we find that incorporating source code information enhances model performance. Finally, using Local Interpretable Model-agnostic Explanations (LIME), an explainable artificial intelligence technique, we find that keywords such as “test” and “api” in bug reports contribute significantly to the prediction of extrinsic bugs.
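For readers unfamiliar with the TextCNN half of the CAN model, the sketch below is a minimal TextCNN classifier over tokenized bug-report text. It is an illustration under our own assumptions: a plain linear head stands in for the KAN component, and the vocabulary size, filter counts, and kernel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TextCNNClassifier(nn.Module):
    """TextCNN over bug-report tokens; the linear head replaces the paper's KAN."""

    def __init__(self, vocab_size=30000, emb_dim=128, n_filters=64,
                 kernel_sizes=(3, 4, 5), n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes]
        )
        self.head = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded bug-report text
        x = self.embed(token_ids).transpose(1, 2)         # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values   # local n-gram features
                  for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))        # intrinsic vs. extrinsic

model = TextCNNClassifier()
logits = model(torch.randint(0, 30000, (8, 200)))
print(logits.shape)  # torch.Size([8, 2])
```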
Citations: 0
Design and Evaluation of Coconut: Typestates for C++
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-25 | DOI: 10.1016/j.scico.2025.103398
Arwa Hameed Alsubhi, Ornela Dardha, Simon J. Gay
This paper introduces Coconut, a C++ tool that uses templates for defining object behaviours and validates them with typestate checking. Coconut employs the GIMPLE intermediate representation (IR) from the GCC compiler's middle-end phase for static checks, ensuring that objects follow the valid state transitions defined in typestate templates. It supports features such as branching, recursion, aliasing, inheritance, and typestate visualisation. We illustrate Coconut's application in embedded systems, validating their behaviour before deployment. We present an experimental study showing that Coconut improves performance and reduces code complexity with respect to the original code, highlighting the benefits of typestate-based verification.
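Coconut itself checks C++ code statically via templates and GIMPLE; purely to illustrate what a typestate protocol constrains, the Python sketch below encodes an allowed-transition table for a hypothetical file protocol and rejects operations issued in the wrong state. The states, operations, and run-time check are our own illustrative choices, not Coconut's mechanism.

```python
class TypestateError(Exception):
    pass

class FileTypestate:
    """Run-time mock of a typestate protocol: open -> (read | write)* -> close."""

    TRANSITIONS = {
        ("Closed", "open"): "Opened",
        ("Opened", "read"): "Opened",
        ("Opened", "write"): "Opened",
        ("Opened", "close"): "Closed",
    }

    def __init__(self):
        self.state = "Closed"

    def do(self, op):
        key = (self.state, op)
        if key not in self.TRANSITIONS:
            raise TypestateError(f"'{op}' not allowed in state {self.state}")
        self.state = self.TRANSITIONS[key]

f = FileTypestate()
for op in ["open", "write", "close"]:
    f.do(op)
try:
    f.do("read")                 # invalid: the file is already closed
except TypestateError as err:
    print(err)                   # 'read' not allowed in state Closed
```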
Citations: 0
QualCode: A Data-Driven Framework for Predicting Software Maintainability Based on ISO/IEC 25010
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-17 | DOI: 10.1016/j.scico.2025.103399
Elham Azhir , Morteza Zakeri , Yasaman Abedini , Mojtaba Mostafavi Ghahfarokhi
This paper presents a comprehensive study on evaluating and predicting software maintainability, leveraging the ISO/IEC 25010 standard as a foundation for software quality assessment. The standard defines eight primary characteristics, including maintainability, which is further divided into subcharacteristics to enable a detailed assessment of software systems. In this context, the QualCode framework is proposed as an efficient solution based on ISO/IEC 25010 principles for calculating the maintainability metric, which involves utilizing an efficient combination of submetrics and harnessing machine learning techniques to enhance the precision of predictions. The QualCode system introduces a comprehensive data-driven and automated approach to software maintainability evaluation, allowing developers and quality assurance teams to gauge the modularity, reusability, analyzability, modifiability, and testability of their software products more effectively. Through an extensive evaluation of prediction models and comparative analyses with existing tools for a diverse set of Java projects, the findings highlight the superior performance of QualCode in predicting software maintainability, reinforcing its significance in the software engineering domain.
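To make the data-driven prediction step concrete, the sketch below trains a regressor on synthetic sub-characteristic scores named after the ISO/IEC 25010 maintainability sub-characteristics listed above. The data, label construction, and model choice are placeholders of our own and are not taken from QualCode.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Each row: hypothetical sub-metric scores for one Java class or module.
feature_names = ["modularity", "reusability", "analyzability",
                 "modifiability", "testability"]
rng = np.random.default_rng(0)
X = rng.random((200, len(feature_names)))          # placeholder sub-metric values
y = X @ np.array([0.25, 0.15, 0.20, 0.20, 0.20])   # placeholder maintainability label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```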
Citations: 0
Social debt in software development environments: A systematic literature review
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-14 | DOI: 10.1016/j.scico.2025.103396
Eydy Suárez-Brieva , César Jésus Pardo Calvache , Ricardo Pérez-Castillo
Context: Lack of communication and coordination in a software development community can lead to short- and long-term social problems. This can result in the misalignment of socio-technical congruence, understood as the disconnect between social and technical factors, which in turn leads to suboptimal decisions. The absence of adequate strategies to manage these problems, together with deficient organizational structures, favors the accumulation of social debt.
Objective: This paper collects and analyzes studies related to the causes, effects, consequences, methods, patterns, domains, and prevention and management strategies of social debt in software development. While agile environments are included in the analysis, the overall focus covers a broader range of organizational and methodological contexts, including distributed, hybrid, and other team models.
Method: A systematic literature review was conducted through a parameterized search in different databases. This allowed us to identify and filter 231 papers, of which 85 were considered relevant and 45 were selected as primary studies.
Results: The main socio-technical factors in which social debt is generated and exerts its impact were identified, along with a limited number of tools (mainly conceptual models and automated mechanisms) that facilitate its detection by defining potential causes affecting the well-being of the team and the companies.
Conclusions: Based on the findings, it is important to further study other causes that allow identifying the presence of social debt, as well as to develop strategies to mitigate its effects on the social and emotional well-being of professionals.
Citations: 0
CtxFuzz: Discovering heap-based memory vulnerabilities through context heap operation sequence guided fuzzing
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-10 | DOI: 10.1016/j.scico.2025.103395
Jiacheng Jiang , Cheng Wen , Zhiyuan Fu , Shengchao Qin
Heap-based memory vulnerabilities are critical to software security and reliability. The presence of these vulnerabilities is affected by various factors, including code coverage, the frequency of heap operations, and the order of execution. Current fuzzing solutions strive to identify these vulnerabilities effectively by employing static analysis or incorporating feedback on the sequence of heap operations. However, these solutions exhibit limited practical applicability and fail to comprehensively address the temporal and spatial dimensions of heap operations. In this paper, we propose a dedicated fuzzing technique called CtxFuzz that efficiently discovers heap-based temporal and spatial memory vulnerabilities without requiring domain-specific knowledge. CtxFuzz employs context heap operation sequences (CHOS) as a novel feedback mechanism to guide the fuzzing process. CHOS comprises sequences of heap operations, including allocation, deallocation, read, and write, that are associated with their corresponding heap memory addresses and identified within the current context during the execution of the target program. By doing so, CtxFuzz can explore more heap states and trigger more heap-based memory vulnerabilities, both temporal and spatial. We evaluate CtxFuzz on 9 real-world open-source programs and compare its performance against 7 state-of-the-art fuzzers. The results indicate that CtxFuzz outperforms most of these fuzzers in terms of discovering heap-based memory vulnerabilities. Furthermore, our experiments led to the identification of ten zero-day vulnerabilities (10 CVEs).
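The Python sketch below conveys the flavour of CHOS feedback under our own simplifying assumptions (it is not CtxFuzz's implementation): heap operations are recorded per address together with their calling context, and a hash of each per-address sequence decides whether an input exercised a previously unseen ordering of heap operations.

```python
import hashlib
from collections import defaultdict

class ChosFeedback:
    """Toy context heap operation sequence (CHOS) feedback map."""

    def __init__(self):
        self.sequences = defaultdict(list)  # address -> [(context, operation), ...]
        self.seen = set()                   # hashes of previously observed sequences

    def record(self, context, op, address):
        assert op in {"alloc", "free", "read", "write"}
        self.sequences[address].append((context, op))

    def new_coverage(self):
        """True if any address exhibits a heap-operation sequence not seen before."""
        progress = False
        for seq in self.sequences.values():
            digest = hashlib.sha1(repr(seq).encode()).hexdigest()
            if digest not in self.seen:
                self.seen.add(digest)
                progress = True
        return progress

fb = ChosFeedback()
fb.record("parse_header", "alloc", 0x1000)
fb.record("parse_body", "write", 0x1000)
fb.record("cleanup", "free", 0x1000)
fb.record("cleanup", "read", 0x1000)   # a use-after-free-shaped ordering
print(fb.new_coverage())               # True: this ordering counts as progress
```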
Citations: 0
The sampling threat when mining generalizable inter-library usage patterns
IF 1.4 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-27 | DOI: 10.1016/j.scico.2025.103393
Yunior Pacheco Correa , Coen De Roover , Johannes Härtel
Tool support in software engineering often relies on relationships, regularities, patterns, or rules mined from other users’ code. Examples include approaches to bug prediction, code recommendation, and code autocompletion. Mining is typically performed on samples of code rather than the entirety of available software projects. While sampling is crucial for scaling data analysis, it can affect the generalization of the mined patterns.
This paper focuses on sampling software projects filtered for specific libraries and frameworks, and on mining patterns that connect different libraries. We call these inter-library patterns. We observe that limiting the sample to a specific library may hinder the generalization of inter-library patterns, posing a threat to their use or interpretation. Using a simulation and a real case study, we demonstrate this threat for different sampling methods. Our simulation shows that an implication only generalizes well when the sample is drawn for the disjunction of both libraries involved in the implication of the pattern. Additionally, we show that real empirical data sampled using the GitHub search API does not behave as expected from our simulation. This identifies a potential threat relevant to many studies that use the GitHub search API for studying inter-library patterns.
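The effect can be reproduced with a toy simulation (our own construction, not the paper's setup): in a synthetic population where libraries A and B are used independently, filtering the sample on library A alone inflates the confidence of the implication "uses B implies uses A" to 1.0, whereas filtering on the disjunction of A and B recovers the population value.

```python
import random

random.seed(0)

# Synthetic population: each project independently uses library A and/or B.
population = [(random.random() < 0.10, random.random() < 0.30)
              for _ in range(100_000)]

def confidence_b_implies_a(projects):
    """Confidence of 'uses B => uses A' within a set of projects."""
    with_b = [(a, b) for a, b in projects if b]
    return sum(1 for a, _ in with_b if a) / len(with_b)

print("whole population   :", round(confidence_b_implies_a(population), 3))

sample_a = [p for p in population if p[0]]               # filter on library A only
print("filtered on A      :", round(confidence_b_implies_a(sample_a), 3))

sample_a_or_b = [p for p in population if p[0] or p[1]]  # filter on A or B
print("filtered on A or B :", round(confidence_b_implies_a(sample_a_or_b), 3))
```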
Citations: 0