
Latest publications in the Journal of Systems and Software

A novel seed scheduling scheme using Thompson sampling for coverage-guided greybox fuzzing
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-23 · DOI: 10.1016/j.jss.2026.112794
Wen Zhang, Jinfu Chen, Saihua Cai, Kun Wang, Yisong Liu, Haotong Ding
Coverage-guided Greybox Fuzzing (CGF) aims to maximize code area exploration within a limited time budget, achieving higher code coverage. Current methods generally estimate seed potential through attributes such as execution speed and size, but often ignore the distribution of the explored program space and the potential of seed categories to detect new coverage, resulting in unbalanced code area exploration and limited detection of complex code. This paper proposes TMS-Fuzz, a new fuzzing seed scheduling method that balances code area exploration by distinguishing the execution coverage features of seed inputs. By computing the path similarity between the execution coverage of different seed inputs, TMS-Fuzz clusters them dynamically and adaptively. Additionally, to improve the return on investment (ROI) of fuzzing, TMS-Fuzz uses a customized Thompson sampling algorithm to statistically select the seed group with the highest ROI, meaning the mutations of seeds in this group are most likely to discover new unique paths and crashes. Finally, TMS-Fuzz fuzzes the target program by mutating the seed files in the selected seed group. Evaluations on eight real-world programs against state-of-the-art open-source fuzzers show that TMS-Fuzz improves edge coverage and crash detection capabilities on real programs.
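The Thompson sampling step described above can be illustrated with a minimal sketch: each seed cluster keeps a Beta posterior over its chance of yielding new coverage, the scheduler samples from every posterior and picks the cluster with the highest draw, then updates that posterior with the observed outcome. This is a reconstruction from the abstract, not the authors' implementation; all names are ours.

```python
import random

class ThompsonSeedScheduler:
    """Sketch: pick the seed cluster whose sampled Beta draw is highest."""

    def __init__(self, n_clusters, rng=None):
        self.rng = rng or random.Random(0)
        # One Beta(alpha, beta) posterior per cluster; start uniform.
        self.alpha = [1.0] * n_clusters
        self.beta = [1.0] * n_clusters

    def select(self):
        # Sample an estimated success rate per cluster, fuzz the best one.
        draws = [self.rng.betavariate(a, b)
                 for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, cluster, found_new_coverage):
        # Reward: a mutation of a seed in this cluster hit a new path/crash.
        if found_new_coverage:
            self.alpha[cluster] += 1
        else:
            self.beta[cluster] += 1
```

Because selection is by sampling rather than by the posterior mean, low-reward clusters are still fuzzed occasionally, which preserves exploration across code areas.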
Citations: 0
An empirical assessment of go linters on real-world issues
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-23 · DOI: 10.1016/j.jss.2026.112797
Jianwei Wu, James Clause
Lightweight static code analysis tools (linters) are commonly used to inspect complex code, locate format violations, detect software vulnerabilities, and fix bugs. However, developers often lack a good understanding of the capabilities of linters for newer languages like Golang. In this paper, we evaluated existing Go linters by surveying professional developers about real-world issues in the industrial workflow at MathWorks. Because of the early adoption of Go linters, we continued to observe issues that disrupted our development workflow. This paper presents our practical experience with Go linters, highlighting specific issues that often escaped detection and the consequences of these gaps. The results of the evaluation show that the linters are often unable to detect issues and, even when they are able to, they are insufficient to guide developers to valid solutions. These results provide a better understanding of the capabilities of Go linters and facilitate the development of better tools in the future.
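For readers unfamiliar with what such linters check, the fragment below is a toy, text-level stand-in for an errcheck-style rule: it flags bare call statements whose returned values (often an error) are silently discarded. Real Go linters work on the typed AST rather than raw text; this sketch only illustrates the class of issue the study surveys, and is not any tool's actual rule.

```python
import re

# Matches a line that is nothing but a call, e.g. "f.Close()" -- the
# returned values (often an error) are discarded. Purely illustrative.
BARE_CALL = re.compile(r"^\s*\w+(?:\.\w+)*\([^)]*\)\s*$")

def flag_discarded_returns(go_source):
    """Return 1-based line numbers where a bare call discards its results."""
    return [lineno
            for lineno, line in enumerate(go_source.splitlines(), start=1)
            if BARE_CALL.match(line)]
```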
Citations: 0
Towards the automated extraction and refactoring of NoSQL schemas from application code
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-23 · DOI: 10.1016/j.jss.2026.112787
Carlos J. Fernandez-Candel, Anthony Cleve, Jesus J. Garcia-Molina
Most NoSQL systems adopt a schema-on-read approach to promote flexibility and agility: the structure of the stored data is not constrained by predefined schemas. However, the absence of explicit schema declarations does not imply the absence of schemas themselves. In practice, schemas are implicit in both the application code and the stored data, and are essential for building tools such as data modelers, query optimizers, data migrators, or for performing database refactorings. As a result, NoSQL schema inference (also known as schema extraction or discovery) has gained attention from the database community, with most approaches focusing on extracting schemas from data. In contrast, source code analysis remains less explored for this purpose.
In this paper, we present a static code analysis strategy to extract logical schemas from NoSQL applications. Our solution is based on a model-driven reverse engineering process composed of a chain of platform-independent model transformations. The extracted schema conforms to the U-Schema unified metamodel, which can represent both NoSQL and relational schemas. To support this process, we define a metamodel capable of representing the core elements of object-oriented languages. Application code is first injected into a code model, from which a control flow model is derived. This, in turn, enables the generation of a model representing both data access operations and the structure of stored data. From these models, the U-Schema logical schema is inferred. Additionally, the extracted information can be used to identify refactoring opportunities. We illustrate this capability through the detection of join-like query patterns and the automated application of field duplication strategies to eliminate expensive joins. All stages of the process are described in detail, and the approach is validated through a round-trip experiment in which an application using a MongoDB store is automatically generated from a predefined schema. The inferred schema is then compared to the original to assess the accuracy of the extraction process.
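For contrast with the paper's code-based extraction, the data-based inference it is compared against can be sketched in a few lines: union each field's observed types across a collection of MongoDB-style documents, recursing into embedded documents. The function and field names here are illustrative, not part of U-Schema.

```python
from collections import defaultdict

def infer_schema(documents):
    """Sketch of data-based NoSQL schema inference.

    Unions each field's observed Python type names across a collection of
    dict-shaped documents; embedded documents are flattened to dotted paths.
    """
    fields = defaultdict(set)
    for doc in documents:
        for key, value in doc.items():
            if isinstance(value, dict):
                # Recurse into the embedded document, prefixing its fields.
                for nkey, ntypes in infer_schema([value]).items():
                    fields[f"{key}.{nkey}"] |= ntypes
            else:
                fields[key].add(type(value).__name__)
    return dict(fields)
```

Optional fields simply appear in fewer documents, and fields whose type set has more than one element reveal the structural variation that schema-on-read permits.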
Citations: 0
ARFT-Transformer: Modeling metric dependencies for cross-project aging-related bug prediction
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-22 · DOI: 10.1016/j.jss.2026.112795
Shuning Ge, Fangyun Qin, Xiaohui Wan, Yang Liu, Qian Dai, Zheng Zheng
Software systems that run for long periods often suffer from software aging, which is typically caused by Aging-Related Bugs (ARBs). To mitigate the risk of ARBs early in the development phase, ARB prediction has been introduced into software aging research. However, due to the difficulty of collecting ARBs, within-project ARB prediction faces the challenge of data scarcity, leading to the proposal of cross-project ARB prediction. This task faces two major challenges: 1) the domain adaptation issue caused by the distribution difference between source and target projects; and 2) severe class imbalance between ARB-prone and ARB-free samples. Although various methods have been proposed for cross-project ARB prediction, existing approaches treat the input metrics independently and often neglect the rich inter-metric dependencies, which can lead to overlapping information and misjudgment of metric importance, potentially affecting the model’s performance. Moreover, they typically use cross-entropy as the loss function during training, which cannot distinguish the difficulty of sample classification. To overcome these limitations, we propose ARFT-Transformer, a transformer-based cross-project ARB prediction framework that introduces a metric-level multi-head attention mechanism to capture metric interactions and incorporates the Focal Loss function to effectively handle class imbalance. Experiments conducted on three large-scale open-source projects demonstrate that ARFT-Transformer on average outperforms state-of-the-art cross-project ARB prediction methods in both single-source and multi-source cases, achieving up to a 29.54% and 19.92% improvement in the Balance metric.
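The Focal Loss mentioned above is the standard formulation of Lin et al.: cross-entropy scaled by (1 − p_t)^γ so that easy, well-classified (here, mostly ARB-free) samples contribute little to the gradient. A minimal scalar version, using the commonly cited defaults rather than the paper's tuned values:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction p = P(class 1), label y in {0, 1}.

    (1 - p_t)**gamma down-weights easy examples so training focuses on hard,
    minority-class samples; gamma=0 recovers alpha-weighted cross-entropy.
    gamma=2.0 and alpha=0.25 are the common defaults, not the paper's values.
    """
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```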
Citations: 0
Many hands make light work: An LLM-based multi-agent system for detecting malicious PyPI packages
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-21 · DOI: 10.1016/j.jss.2026.112792
Muhammad Umar Zeshan, Motunrayo Ibiyo, Claudio Di Sipio, Phuong T. Nguyen, Davide Di Ruscio
Malicious code in open-source repositories such as PyPI poses a growing threat to software supply chains. Traditional rule-based tools often overlook the semantic patterns in source code that are crucial for identifying adversarial components. Large language models (LLMs) show promise for software analysis, yet their use in interpretable and modular security pipelines remains limited.
This paper presents LAMPS, a multi-agent system that employs collaborative LLMs to detect malicious PyPI packages. The system consists of four role-specific agents for package retrieval, file extraction, classification, and verdict aggregation, coordinated through the CrewAI framework. A prototype combines a fine-tuned CodeBERT model for classification with LLaMA 3 agents for contextual reasoning. LAMPS has been evaluated on two complementary datasets: D1, a balanced collection of 6000 setup.py files, and D2, a realistic multi-file dataset with 1296 files and natural class imbalance. On D1, LAMPS achieves 97.7% accuracy, surpassing MPHunter and TF-IDF stacking models, two state-of-the-art approaches. On D2, it reaches 99.5% accuracy and 99.5% balanced accuracy, outperforming RAG-based approaches and fine-tuned single-agent baselines. McNemar’s test confirmed these improvements as highly significant. The results demonstrate the feasibility of distributed LLM reasoning for malicious code detection and highlight the benefits of modular multi-agent designs in software supply chain security.
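The verdict-aggregation agent's exact policy is not spelled out in the abstract; one plausible minimal rule is to flag a package when at least a threshold number of file-level verdicts are malicious and report the offending files as evidence. The sketch below is hypothetical, not the LAMPS implementation:

```python
def aggregate_verdicts(file_verdicts, threshold=1):
    """Package-level verdict from per-file classifier outputs.

    Hypothetical aggregation rule (not LAMPS's actual policy): the package
    is malicious if at least `threshold` files were classified malicious;
    flagged files are returned as evidence for the analyst.
    """
    flagged = [name for name, label in file_verdicts.items()
               if label == "malicious"]
    return {"malicious": len(flagged) >= threshold,
            "evidence": sorted(flagged)}
```

Keeping aggregation as a separate, deterministic stage is one reason multi-agent pipelines stay interpretable: the final verdict can always be traced back to named files.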
Citations: 0
SysPro: Reproducing system-level concurrency bugs from bug reports
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-21 · DOI: 10.1016/j.jss.2026.112785
Tarannum Shaila Zaman, Chadni Islam, Jiangfan Shi, Zihan Shi, Fiona Xian, Tingting Yu
Reproducing system-level concurrency bugs requires both input data and the precise interleaving order of system calls. This process is challenging because such bugs are non-deterministic, and bug reports often lack the detailed information needed. Additionally, the unstructured nature of reports written in natural language makes it difficult to extract necessary details. Existing tools are inadequate to reproduce these bugs due to their inability to manage the specific interleaving at the system call level. To address these challenges, we propose SysPro, a novel approach that automatically extracts relevant system call names from bug reports and identifies their locations in the source code. It generates input data by utilizing information retrieval, regular expression matching, and the category-partition method. This extracted input and interleaving data are then used to reproduce bugs through dynamic source code instrumentation. Our empirical study on real-world benchmarks demonstrates that SysPro is both effective and efficient at localizing and reproducing system-level concurrency bugs from bug reports.
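The first stage, recovering system call names from free-text reports, can be approximated with simple pattern matching against a known syscall vocabulary, in the spirit of SysPro's regular-expression step. The vocabulary and matching rules below are illustrative assumptions, not SysPro's actual ones:

```python
import re

# A small, assumed vocabulary of POSIX system call names; SysPro's real
# vocabulary and matching rules are not given in the abstract.
KNOWN_SYSCALLS = {"open", "read", "write", "close", "mmap", "fsync", "unlink"}

def extract_syscalls(report_text):
    """Return known system call names mentioned in a free-text bug report,
    in order of first mention and without duplicates."""
    tokens = re.findall(r"[A-Za-z_]\w*", report_text.lower())
    seen, ordered = set(), []
    for tok in tokens:
        if tok in KNOWN_SYSCALLS and tok not in seen:
            seen.add(tok)
            ordered.append(tok)
    return ordered
```

Preserving mention order matters here because the recovered sequence is a first guess at the interleaving that must later be enforced by instrumentation.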
Citations: 0
An empirical eye-tracking study of cross-lingual program comprehension and debugging
IF 4.1 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-01-21 · DOI: 10.1016/j.jss.2026.112793
Ameer Mohammed, Reem Albaghli, Hanaa Alrushood, Fatme Ghaddar
The ability for students to effectively adapt to new programming languages is a desirable skill that is useful for careers demanding rapid adoption of different languages. This study aims to measure the cognitive load required to generalize programming skills from one language to another across three different tasks: code comprehension, syntactic debugging, and semantic debugging. Participants with basic background in either Java or Python (but not both) were asked to explain a given code segment or identify the syntactic/semantic bug for code written in Python. The cognitive load (i.e., mental effort) to tackle the three tasks in Python for Java-trained students is then measured by employing eye-tracking technology and compared against Python-trained students to determine the overhead in processing these tasks. Our results show that the difference in cognitive load between Java and Python students was more significant when focusing on conditional or iterative constructs compared to other statements in the code. These findings suggest that certain code elements require more effort than others when trying to understand code in a new language, guiding educators toward focusing more on those challenging areas when instructing students with existing knowledge in a different programming language.
Citations: 0
On the use of machine learning for failure prediction after collective changes in automated continuous integration testing
IF 4.1 2区 计算机科学 Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-20 DOI: 10.1016/j.jss.2026.112791
Ömer Özdemir , Reyhan Aydoğan , Hasan Sözer
Continuous Integration (CI) is a development practice where developers regularly merge their code changes into a central repository, enabling simultaneous collaboration across a shared codebase. This frequent integration and automated building process in CI helps to detect and resolve conflicts or errors early in development. However, in large-scale systems, the build process can be costly. Each build incurs expenses, while skipping builds can increase the risk of undetected failures. Accurate predictions can help to identify builds that can be safely skipped to reduce CI costs. This paper presents an empirical study within an industrial setting, investigating the use of machine learning techniques to predict build failures after a set of collective changes. Unlike many existing works that apply random data splitting, our results show that chronological (time-based) splitting offers a more realistic and reliable assessment of model performance in CI environments. We evaluate various models and feature combinations on a dataset derived from real-world industrial projects. We observe high precision but low recall in predicting failed builds, allowing hundreds of successful builds to be correctly skipped, with around a dozen failures potentially being missed. Our analysis shows that this yields substantial time savings of approximately 2.5 h per build on average, while missed failures necessarily result in delayed failure detection, whose practical impact depends on application criticality and operational context.
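The chronological-splitting point can be illustrated with a minimal sketch — not the authors' pipeline; the build records and field names are illustrative assumptions:

```python
# Contrast random vs. chronological (time-based) train/test splitting
# for build records. Random shuffling lets a model train on builds that
# occur AFTER some test builds, leaking future information; the
# chronological split evaluates only on data from the model's "future".
import random

def random_split(builds, test_frac=0.2, seed=0):
    """Shuffle builds, then hold out test_frac -- may leak future data."""
    rng = random.Random(seed)
    shuffled = builds[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def chronological_split(builds, test_frac=0.2):
    """Sort by timestamp and hold out the most recent builds."""
    ordered = sorted(builds, key=lambda b: b["timestamp"])
    cut = int(len(ordered) * (1 - test_frac))
    return ordered[:cut], ordered[cut:]

# Illustrative build log: one record per build, oldest first.
builds = [{"id": i, "timestamp": i, "failed": i % 7 == 0} for i in range(100)]
train, test = chronological_split(builds)
# Every training build strictly precedes every test build in time.
assert max(b["timestamp"] for b in train) < min(b["timestamp"] for b in test)
```

The same holdout evaluated after a random split would intermix past and future builds, which is the optimistic bias the study's chronological protocol avoids.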
{"title":"On the use of machine learning for failure prediction after collective changes in automated continuous integration testing","authors":"Ömer Özdemir ,&nbsp;Reyhan Aydoğan ,&nbsp;Hasan Sözer","doi":"10.1016/j.jss.2026.112791","DOIUrl":"10.1016/j.jss.2026.112791","url":null,"abstract":"<div><div>Continuous Integration (CI) is a development practice where developers regularly merge their code changes into a central repository, enabling simultaneous collaboration across a shared codebase. This frequent integration and automated building process in CI helps to detect and resolve conflicts or errors early in development. However, in large-scale systems, the build process can be costly. Each build incurs expenses, while skipping builds can increase the risk of undetected failures. Accurate predictions can help to identify builds that can be safely skipped to reduce CI costs. This paper presents an empirical study within an industrial setting, investigating the use of machine learning techniques to predict build failures after a set of collective changes. Unlike many existing works that apply random data splitting, our results show that chronological (time-based) splitting offers a more realistic and reliable assessment of model performance in CI environments. We evaluate various models and feature combinations on a dataset derived from real-world industrial projects. We observe high precision but low recall in predicting failed builds, allowing hundreds of successful builds to be correctly skipped, with around a dozen failures potentially being missed. 
Our analysis shows that this yields substantial time savings of approximately 2.5 h per build on average, while missed failures necessarily result in delayed failure detection, whose practical impact depends on application criticality and operational context.</div></div>","PeriodicalId":51099,"journal":{"name":"Journal of Systems and Software","volume":"236 ","pages":"Article 112791"},"PeriodicalIF":4.1,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146080858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The pragmatics of hybridity: A grounded theory of method integration in software engineering projects
IF 4.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2026-01-18 DOI: 10.1016/j.jss.2026.112790
Godfried B Adaba
Hybrid project management is becoming a dominant delivery mode in software engineering, yet the mechanisms through which organisations enact and sustain hybrid practices remain insufficiently theorised. Existing accounts often imply a linear or prescriptive integration of governance and agile methods, overlooking the negotiated, context-dependent nature of hybrid work. This study advances a process-based explanation of hybrid delivery by developing the grounded theory of contingent hybridity, derived through a Constructivist Grounded Theory (CGT) study within a multinational IT firm. Drawing on interviews, observations, and project artefacts, the findings show that hybridisation is not simply the coexistence of plan-driven project governance and agile routines, but an emergent socio-technical process shaped by practitioners’ interpretive work and situated adaptation. Four interdependent mechanisms structure this process: structural anchoring, through which governance frameworks provide stability and legitimacy; adaptive enactment, whereby agile practices are tailored and embedded within formal controls; boundary work, involving translators and hybrid ceremonies that reconcile divergent organisational logics; and role hybridisation, in which practitioners fluidly shift between control-oriented and delivery-focused responsibilities. The analysis demonstrates that hybrid practices vary across roles and project phases, with effective integration depending less on adherence to prescribed templates and more on ongoing, context-sensitive negotiation. These insights refine theoretical understandings of hybrid project management by moving beyond static typologies toward a dynamic, practice-centred perspective and offer actionable guidance for organisations seeking to balance agility and control in complex, regulated environments.
{"title":"The pragmatics of hybridity: A grounded theory of method integration in software engineering projects","authors":"Godfried B Adaba","doi":"10.1016/j.jss.2026.112790","DOIUrl":"10.1016/j.jss.2026.112790","url":null,"abstract":"<div><div>Hybrid project management is becoming a dominant delivery mode in software engineering, yet the mechanisms through which organisations enact and sustain hybrid practices remain insufficiently theorised. Existing accounts often imply a linear or prescriptive integration of governance and agile methods, overlooking the negotiated, context-dependent nature of hybrid work. This study advances a process-based explanation of hybrid delivery by developing the grounded theory of contingent hybridity, derived through a Constructivist Grounded Theory (CGT) study within a multinational IT firm. Drawing on interviews, observations, and project artefacts, the findings show that hybridisation is not simply the coexistence of plan-driven project governance and agile routines, but an emergent socio-technical process shaped by practitioners’ interpretive work and situated adaptation. Four interdependent mechanisms structure this process: structural anchoring, through which governance frameworks provide stability and legitimacy; adaptive enactment, whereby agile practices are tailored and embedded within formal controls; boundary work, involving translators and hybrid ceremonies that reconcile divergent organisational logics; and role hybridisation, in which practitioners fluidly shift between control-oriented and delivery-focused responsibilities. The analysis demonstrates that hybrid practices vary across roles and project phases, with effective integration depending less on adherence to prescribed templates and more on ongoing, context-sensitive negotiation. 
These insights refine theoretical understandings of hybrid project management by moving beyond static typologies toward a dynamic, practice-centred perspective and offer actionable guidance for organisations seeking to balance agility and control in complex, regulated environments.</div></div>","PeriodicalId":51099,"journal":{"name":"Journal of Systems and Software","volume":"235 ","pages":"Article 112790"},"PeriodicalIF":4.1,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146038340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Fine-grained parametric bootstrap approach for NHPP-based software reliability modeling
IF 4.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2026-01-18 DOI: 10.1016/j.jss.2026.112789
Jingchi Wu, Tadashi Dohi, Junjun Zheng, Hiroyuki Okamura
In software reliability, practitioners need to estimate software reliability measures accurately from software fault-count data to support release decisions and project management. To achieve these objectives, software fault-count processes are often described using software reliability models (SRMs) based on stochastic counting processes such as non-homogeneous Poisson processes (NHPPs), and statistical point estimation of the model parameters is carried out. Substituting the point estimates of the model parameters into several software reliability measures yields point estimates of the desired reliability measures. However, since such point estimators tend to have high variances, the resulting release decisions and project management plans are not reliable under uncertainty. Interval estimation of software reliability measures is therefore expected to enable more robust decision making, but analytical confidence regions are quite difficult to obtain. Bootstrap is a statistical method that generates realizations of statistical estimators by resampling fault-count data, allowing the statistical properties of software reliability measures to be evaluated under uncertainty. In this paper, we propose a fine-grained parametric bootstrap method for NHPP-based SRMs, in which a thinning-like resampling algorithm is employed instead of intuitive resampling algorithms that generate bootstrap data with a ties problem. We compare our thinning-like resampling algorithm with existing ones in both a Monte Carlo simulation and an empirical study. The results show that the model parameters and their associated software reliability measures estimated by our fine-grained parametric bootstrap method are more accurate and robust than those of the other bootstrap algorithms.
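A minimal sketch of the thinning idea this builds on — not the paper's fine-grained algorithm, only the standard Lewis–Shedler thinning step for simulating an NHPP, here a Goel–Okumoto model with illustrative (not estimated) parameters:

```python
# Parametric bootstrap sketch: given point estimates (a_hat, b_hat) of a
# Goel-Okumoto NHPP, generate bootstrap realizations of the fault-count
# process by thinning a homogeneous Poisson process. Parameter values
# below are illustrative assumptions, not fitted estimates.
import math
import random

def go_intensity(t, a, b):
    """Goel-Okumoto intensity: lambda(t) = a * b * exp(-b * t)."""
    return a * b * math.exp(-b * t)

def thinning_sample(a, b, horizon, rng):
    """One NHPP realization on [0, horizon] via Lewis-Shedler thinning.

    The GO intensity is maximal at t=0, so lam_max = a*b dominates
    lambda(t) on the whole interval, as thinning requires.
    """
    lam_max = a * b
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)  # candidate from the homogeneous process
        if t > horizon:
            return events
        if rng.random() <= go_intensity(t, a, b) / lam_max:
            events.append(t)           # accept with prob lambda(t)/lam_max

def bootstrap_counts(a, b, horizon, n_boot, seed=1):
    """Bootstrap sample of total fault counts over [0, horizon]."""
    rng = random.Random(seed)
    return [len(thinning_sample(a, b, horizon, rng)) for _ in range(n_boot)]

a_hat, b_hat = 100.0, 0.05  # assumed point estimates (illustrative only)
counts = bootstrap_counts(a_hat, b_hat, horizon=50.0, n_boot=200)
mean_count = sum(counts) / len(counts)
# The bootstrap mean should track m(50) = a*(1 - exp(-b*50)) ~ 91.8.
```

The empirical distribution of `counts` (or of measures recomputed from each resample) then yields bootstrap confidence intervals where analytical regions are intractable.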
{"title":"A Fine-grained parametric bootstrap approach for NHPP-based software reliability modeling","authors":"Jingchi Wu,&nbsp;Tadashi Dohi,&nbsp;Junjun Zheng,&nbsp;Hiroyuki Okamura","doi":"10.1016/j.jss.2026.112789","DOIUrl":"10.1016/j.jss.2026.112789","url":null,"abstract":"<div><div>In software reliability, practitioners demand to estimate software reliability measures accurately from software fault-count data, for making the release decision and project management. To achieve these objectives, software fault-count processes are often described using software reliability models (SRMs) based on stochastic counting processes like non-homogeneous Poisson processes (NHPPs), and statistical point estimation of model parameters is carried out. Substituting the point estimates of model parameters into several software reliability measures, one gets the point estimates of desired reliability measures. However, since such point estimators tend to have high variances, the resulting release decision and project management plans are not reliable under uncertainty. Then, interval estimation of software reliability measures is expected to realize more robust decision making, but is quite difficult to obtain the analytical confidence regions. Bootstrap is a statistical method that generates realizations of statistical estimators by resampling fault-count data. It allows us to evaluate the statistical properties of software reliability measures under uncertainty. In this paper, we propose a fine-grained parametric bootstrap method for NHPP-based SRMs, where a thinning-like resampling algorithm is employed instead of intuitive resampling algorithms which generate the bootstrap data with ties problem. We compare our thinning-like resampling algorithm with the existing ones in both Monte Carlo simulation and empirical study. 
It can be shown that the model parameters and their associated software reliability measures estimated by our fine-grained parametric bootstrap method are more accurate and robust than the other bootstrap algorithms.</div></div>","PeriodicalId":51099,"journal":{"name":"Journal of Systems and Software","volume":"236 ","pages":"Article 112789"},"PeriodicalIF":4.1,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146080860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0