
Latest publications from the Journal of Systems and Software

On software testing reference ontologies
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-25. DOI: 10.1016/j.jss.2025.112759
Maryam Havakeshian, Yvan Labiche
Software testing manipulates various artifacts, such as application and test code, requirements, test objectives and results, that are important to stakeholders such as developers, testers, and the QA team. To reason and make decisions from these varied artifacts, information needs to be recorded in a structured, computer-readable format. This highlights the need for effective knowledge management, for which ontologies are an ideal solution.
We aim to answer two key questions for software testing practitioners who wish to use ontology-based knowledge management: what criteria can one rely on to decide which testing ontology to use, and how do existing ontologies fare when using these criteria?
We coalesced several notions of ontology quality under the umbrella of the “beautiful ontology” concept that others have introduced. In doing so, we focussed on ontology evaluation criteria that are sufficiently well-defined to lead to repeatable assessments. Relying on published systematic literature reviews, we selected four testing reference ontologies for assessment, namely STOWS, OntoTest, ROoST, and TestTDO.
Results indicate that only a small number of published ontology assessment criteria have been defined with sufficient formality to allow their unbiased use. We therefore primarily based our assessment of the four selected ontologies on the necessary isomorphic mapping between an ontology and the domain it represents. Results indicate that none of the selected ontologies is designed with sufficient rigour. One observation we make is that published ontologies ought to be described more rigorously, with a complete dictionary of all concepts, properties, relations, and axioms.
Citations: 0
Leveraging explainable AI to characterize floating-point exceptions in linear solvers
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-24. DOI: 10.1016/j.jss.2025.112757
Ignacio Laguna
Linear solver packages are central to many scientific, engineering, and machine learning applications. When floating-point exceptions occur in these solvers, e.g., division by zero or overflow, numerical results are compromised and become unreliable. Existing static and dynamic analysis tools can detect such exceptions, but they do not explain why the exceptions occur in terms of the solver inputs. We present a study to characterize the inputs that cause numerical exceptions in linear solver packages. Our approach uses explainable AI (XAI) to find the most relevant characteristics of input matrices that explain the occurrence of exceptions in the solvers. Since training data in this domain is scarce, we perform extensive data gathering and data augmentation to obtain exception-inducing inputs. Our approach uses a repair strategy on the features blamed by XAI to validate that such features indeed explain the exceptions. We compare the LIME and SHAP XAI techniques using a dozen matrix features with three classifiers. We evaluate the approach on three widely used linear solver packages and find that some input characteristics can explain the occurrence of exceptions 100% of the time, in specific solvers and preconditioners.
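The failure class the study targets is easy to reproduce outside any solver package. The sketch below is our own illustration (not the paper's tooling): it uses NumPy's error-state machinery to turn silent inf/nan results into catchable exceptions during a solver-like step, which is the kind of signal the XAI models are then asked to explain in terms of input-matrix features.

```python
import numpy as np

# Toy sketch (our illustration, not the paper's tooling): run a cheap
# solver-like computation and report which floating-point exception,
# if any, the input matrix triggers.
def probe_exceptions(A):
    """Return the FP-exception message a matrix triggers, or 'no-exception'."""
    b = np.ones(A.shape[0])
    try:
        # Raise catchable errors instead of silently producing inf/nan.
        with np.errstate(divide="raise", over="raise", invalid="raise"):
            x = b / np.diag(A)                  # Jacobi-style diagonal scaling
            y = np.diag(A) * np.max(np.abs(A))  # scaling step that can overflow
        return "no-exception"
    except FloatingPointError as err:
        return str(err)

print(probe_exceptions(np.array([[0.0, 1.0], [1.0, 2.0]])))  # zero pivot -> divide by zero
```

A matrix with a zero diagonal entry triggers a divide-by-zero, one with entries near the float64 maximum triggers an overflow, and a well-scaled matrix completes cleanly; correlating such outcomes with matrix features is what the study automates via XAI.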
Citations: 0
Clash: Enhancing context-sensitivity in data-flow analysis for mitigating the impact of indirect calls
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-24. DOI: 10.1016/j.jss.2025.112753
Jinyan Xie, Yingzhou Zhang, Mingzhe Hu, Liping Han, Le Yu, Qiuran Ding
Indirect calls pose significant challenges to static data-flow analysis due to their dynamic resolution at runtime, which introduces imprecision and inefficiency by allowing spurious data facts to propagate across mismatched call contexts. Prior work based on Context-Free-Language (CFL) reachability cannot maintain context-sensitivity when resolving indirect calls, because the dynamically constructed data-flow edges lack sufficient context information. To address this limitation, we present Clash, a novel framework that enhances context-sensitivity in data-flow analysis without compromising scalability. Clash introduces two core innovations: mutex identifiers and conflict detectors. Mutex identifiers statically approximate invocation contexts and are bound to function pointers to detect mismatches during data-flow analysis. Conflict detectors refer to the Call-Store Constraint (CSC), the Callback Constraint (CBC), and the Call-Return Constraint (CRC), which selectively suppress invalid inter-procedural data-flows at critical edges in the Static Value-Flow Graph (SVFG). These mechanisms enable Clash to efficiently prune infeasible data-flows while preserving necessary data dependencies.
Our evaluation shows that Clash achieves an average reduction of 76.93%, 51.79%, 53.97% and 48.23% in excessive data-flows compared to First-Layer Type Analysis (FLTA), Inter-procedural Finite Distributive Subset (IFDS/IDE), Static-Value-Flow (SVF) and Type-Flow Analysis (TFA) respectively. In terms of performance, Clash reduces whole-program analysis time by 82.83%, 27.79%, 41.4%, and 39.38% compared to these methods respectively. Therefore, Clash provides a scalable and precise solution for managing context-sensitive data-flows in the presence of complex indirect calls.
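The call/return matching that such constraints enforce can be illustrated with a toy, CFL-style reachability check. Everything below is our simplified sketch, not Clash itself: data-flow edges through an indirect call carry the call-site identifier, and propagation along a return edge is pruned when that identifier does not match the innermost open call.

```python
# Toy CFL-style sketch (our illustration, not Clash): propagate a data fact
# along value-flow edges; "call" edges push a call-site identifier, and a
# "return" edge is taken only if its identifier matches the innermost open
# call -- mismatched returns are pruned as spurious flows.
def propagate(edges, source):
    """Return the nodes a data fact can reach under call/return matching."""
    reached, stack = set(), [(source, ())]
    while stack:
        node, ctx = stack.pop()
        if (node, ctx) in reached:
            continue
        reached.add((node, ctx))
        for src, dst, kind, site in edges:
            if src != node:
                continue
            if kind == "call":                    # push the call-site id
                stack.append((dst, ctx + (site,)))
            elif kind == "return":                # pop only on a match
                if ctx and ctx[-1] == site:
                    stack.append((dst, ctx[:-1]))
            else:                                 # intra-procedural edge
                stack.append((dst, ctx))
    return {node for node, _ in reached}

edges = [
    ("a", "f_in", "call", 1),         # value enters f via call site 1
    ("f_in", "f_out", "intra", None),
    ("f_out", "b", "return", 1),      # matching return: feasible flow
    ("f_out", "c", "return", 2),      # mismatched return: pruned
]
print(propagate(edges, "a"))          # 'c' is excluded
```

A context-insensitive analysis would also report the flow into `c`; the matching step is what removes that spurious fact.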
Citations: 0
A self-sustainable service assembly for decentralized computing environments
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-23. DOI: 10.1016/j.jss.2025.112755
Mauro Caporuscio, Mirko D’Angelo, Vincenzo Grassi, Raffaela Mirandola, Francesca Ricci
The landscape of modern computing systems is shifting towards architectures built by combining available services under the “everything as a service” paradigm. These architectures are deployed on distributed cloud-edge infrastructures, aiming to provide innovative services to a wide range of users. However, it is crucial for these systems to address environmental sustainability concerns. This poses challenges in operating such systems in open, dynamic, and uncertain environments while minimizing their energy consumption. To tackle these challenges, we propose a decentralized service assembly approach that ensures the assembly is energetically self-sustainable by relying on locally harvested and stored energy. In our contribution, we introduce a general service selection template that enables the derivation of different selection policies. These policies guide the construction and maintenance of the service assembly. To evaluate their effectiveness in meeting the sustainability requirements, we conduct a comprehensive set of simulation experiments, providing valuable insights.
Citations: 0
A consistency management framework for digital twin models
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-22. DOI: 10.1016/j.jss.2025.112750
Hossain Muhammad Muctadir, Eduard Kamburjan, Loek Cleophas, Mark van den Brand
Digital twins (DTs) encapsulate the concept of a real-world entity (RE) and a corresponding, bidirectionally connected virtual entity (VE) that mimics certain aspects of the former in order to facilitate use cases such as predictive maintenance. DTs typically encompass various models that are often developed by experts from different domains using diverse tools. To maintain consistency among these models and ensure the continued functioning of the system, it is imperative to effectively identify any consistency issues and address them whenever necessary. In this paper, we investigate the concept of consistency management and propose a consistency management framework that addresses various characteristics of DT models. Subsequently, we present three working examples that implement the proposed framework with graph-based techniques. Taking the working examples into account, we demonstrate and argue that our consistency management framework can provide crucial assistance in the consistency management of DT models.
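As a concrete, hypothetical illustration of what such a framework checks, the sketch below compares two toy DT models (plain dictionaries standing in for property graphs) and flags shared elements whose properties disagree; the model names, elements, and properties are all invented.

```python
# Toy sketch (our illustration): represent two DT models as nested dicts
# and flag inconsistencies where a shared element's shared property holds
# different values -- the kind of check a graph-based implementation encodes.
def find_inconsistencies(model_a, model_b):
    """Return (element, property, value_a, value_b) tuples that disagree."""
    issues = []
    for elem in model_a.keys() & model_b.keys():          # shared elements
        for prop in model_a[elem].keys() & model_b[elem].keys():
            if model_a[elem][prop] != model_b[elem][prop]:
                issues.append((elem, prop, model_a[elem][prop], model_b[elem][prop]))
    return issues

# Hypothetical models: a simulation model and a CAD model of the same motor.
simulation = {"motor": {"max_rpm": 3000, "mass_kg": 12.5}}
cad        = {"motor": {"max_rpm": 2800, "mass_kg": 12.5}}
print(find_inconsistencies(simulation, cad))  # the max_rpm values disagree
```

A real framework would additionally decide which model is authoritative and propagate the fix; this sketch only covers the identification step.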
Citations: 0
Assessing the capability of android dynamic analysis tools to combat anti-runtime analysis techniques
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-18. DOI: 10.1016/j.jss.2025.112747
Dewen Suo, Lei Xue, Weihao Huang, Runze Tan, Guozi Sun
As the dominant mobile operating system, Android continues to attract a substantial influx of new applications each year. However, this growth is accompanied by increased attention from malicious actors, resulting in a significant rise in security threats to the Android ecosystem. Among these threats, the adoption of Anti-Runtime Analysis (ARA) techniques by malicious applications poses a serious challenge, as it hinders security professionals from effectively analyzing malicious behaviors using dynamic analysis tools. ARA technologies are designed to prevent the dynamic examination of applications, thus complicating efforts to ensure platform security. This paper presents a comprehensive empirical study that assesses the ability of widely-used Android dynamic analysis tools to bypass various ARA techniques. Our findings reveal a critical gap in the effectiveness of existing dynamic analysis tools to counter ARA mechanisms, highlighting an urgent need for more robust solutions. This work provides valuable insights into the limitations of existing tools and highlights the need for improved methods to counteract ARA technologies, thus advancing the field of software security and dynamic analysis.
Citations: 0
A search-based file recommendation approach for infrastructure-as-code evolution
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-17. DOI: 10.1016/j.jss.2025.112746
Narjes Bessghaier, Ali Ouni, Mohammed Sayagh, Mohamed Wiem Mkaouer
Infrastructure-as-Code (IaC) is increasingly adopted in modern software projects to automate the setup of infrastructure components like servers and networking through code-based files. However, as configurations grow in size, complexity, and coupling with other artifacts (e.g., test files), it becomes challenging for development teams to identify the right files to change. Therefore, we propose an automated approach to recommend files likely to co-change with a given IaC file. Our approach uses a mono-objective genetic algorithm (GA) with a combination of two heuristics: file similarity and change history. Then, GA searches for the optimal solution and generates a ranked list of files from the most to the least likely files to co-change with the given IaC file. We evaluated our approach on 20 open-source Ansible and Puppet projects. Results show that our approach correctly recommended files in 86 % of commits within the top 10 recommendations. Using Instance Space Analysis (ISA), we found that our approach performs better for IaC files relying heavily on external modules and maintained by dedicated developers. However, GA struggles with highly customized Ansible files. In Puppet projects, high path similarity did not consistently predict co-changing files, while higher content similarity improved the similarity heuristic.
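A minimal sketch of the two-heuristic fitness follows. This is our simplification: the paper searches with a genetic algorithm, while here we only score and rank candidate files; the Ansible-style file names and the normalized similarity/history scores are hypothetical.

```python
# Minimal sketch of the two-heuristic fitness (our simplification: the paper
# searches with a genetic algorithm; here we only score and rank candidates).
# File names and the normalized similarity/history scores are hypothetical.
def rank_co_change_candidates(candidates, w_sim=0.5, w_hist=0.5):
    """Rank files by a weighted combination of file similarity and change history."""
    scored = [
        (name, w_sim * feats["similarity"] + w_hist * feats["history"])
        for name, feats in candidates.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = {
    "roles/web/tasks/main.yml":    {"similarity": 0.9, "history": 0.7},
    "roles/db/tasks/main.yml":     {"similarity": 0.2, "history": 0.1},
    "molecule/default/verify.yml": {"similarity": 0.4, "history": 0.8},
}
ranking = rank_co_change_candidates(candidates)
print([name for name, _ in ranking])  # most to least likely to co-change
```

The GA in the paper searches over candidate solutions against this kind of combined objective; the ranked list it emits has the same shape as `ranking` here.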
Citations: 0
On the use of unsupervised machine learning for classification of crowd-based software requirements
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2025-12-15. DOI: 10.1016/j.jss.2025.112724
Naimish Sharma, Arpit Sharma
Crowd-based requirements engineering (CrowdRE) involves large-scale user participation to gather software requirements. Supervised machine learning (SML) is commonly used to classify these requirements but demands significant time, computational resources, and high-quality labeled data, which are scarce in CrowdRE. The objective of this paper is to investigate the potential of clustering-based unsupervised ML to classify crowd-based requirements. Our framework evaluates sentence embedding models which convert textual requirements into numerical vectors, selecting optimal ones using information retrieval (IR) measures. These vectors are grouped via clustering algorithms, followed by manual or automated label assignment. Automated labeling involves generating a class-specific corpus for every class and computing semantic similarity to assign labels, while manual labeling is supported by topic modeling which uncovers thematic structures within every cluster. We validated the framework on 3000 crowd-generated smart home requirements, tackling binary, ternary, quaternary, and quinary classification tasks. Automated labeling achieved F1 scores of up to ∼90 %, ∼82 %, ∼70 %, and ∼52 %, respectively, with manual labeling showing similar performance. Compared to logistic regression, which is a supervised ML model, our framework occasionally outperformed it in F1 scores. Against Llama-3.2-3B-Instruct, a state-of-the-art lightweight large language model (LLM), it surpassed performance in 38 % of automated and 35 % of manual labeling cases. We also show that our framework enables one to analyze and identify labeling-related issues in the dataset, enhancing ground-truth data quality. These findings show that computationally efficient unsupervised methods effectively classify software requirements in data-scarce CrowdRE settings, offering a viable alternative to supervised approaches.
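The clustering-then-labeling pipeline can be sketched end-to-end on toy data. This is our illustration, not the authors' framework: hypothetical 2-D vectors stand in for sentence embeddings, a from-scratch k-means replaces the evaluated clustering algorithms, and automated labeling assigns each cluster the class whose (invented) prototype vector is most cosine-similar to the cluster centroid.

```python
import numpy as np

# Toy sketch of the pipeline (our illustration): hypothetical 2-D "embeddings"
# stand in for sentence-embedding vectors; cluster them with a small
# from-scratch k-means, then auto-label each cluster by its centroid's
# cosine similarity to an invented class prototype.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            X[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
            for c in range(k)
        ])
    return labels, centroids

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cluster_and_label(vectors, prototypes, k):
    """Cluster requirement vectors, then name each cluster after the most similar class."""
    labels, centroids = kmeans(vectors, k)
    name_of = {c: max(prototypes, key=lambda n: cosine(centroids[c], prototypes[n]))
               for c in range(k)}
    return [name_of[c] for c in labels]

vectors = np.array([[0.9, 0.1], [1.0, 0.0], [0.1, 0.9], [0.0, 1.0]])
prototypes = {"functional": np.array([1.0, 0.0]), "quality": np.array([0.0, 1.0])}
print(cluster_and_label(vectors, prototypes, k=2))
```

No labeled training data is needed at any step, which is the property that makes the approach attractive in data-scarce CrowdRE settings.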
Naimish Sharma, Arpit Sharma. On the use of unsupervised machine learning for classification of crowd-based software requirements. Journal of Systems and Software, vol. 234, Article 112724. DOI: 10.1016/j.jss.2025.112724
Citations: 0
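The clustering-plus-automated-labeling pipeline described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: TF-IDF stands in for the paper's sentence-embedding models, and the requirements, class names, and class-specific corpora are all invented for the example.

```python
# Sketch of clustering-based unsupervised requirement classification with
# automated label assignment. TF-IDF replaces the paper's sentence embeddings;
# all texts below are illustrative, not from the paper's dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The thermostat shall lower the heating temperature at night",
    "The heating shall stop when the room temperature is high",
    "The alarm shall notify the owner of any intrusion",
    "The alarm shall send the owner an intrusion warning",
]

# Class-specific corpora used for automated label assignment (invented).
class_corpora = {
    "energy": "thermostat heating temperature energy night",
    "security": "alarm intrusion owner notify lock",
}

# Embed requirements and corpora in the same vector space.
vectorizer = TfidfVectorizer().fit(requirements + list(class_corpora.values()))
X = vectorizer.transform(requirements)

# Group the requirement vectors; one cluster per target class.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Automated labeling: give each cluster the class whose corpus is most
# similar (cosine) to the cluster centroid.
cluster_label = {}
for c in set(clusters):
    centroid = np.asarray(X[clusters == c].mean(axis=0))
    sims = {
        name: cosine_similarity(centroid, vectorizer.transform([text]))[0, 0]
        for name, text in class_corpora.items()
    }
    cluster_label[c] = max(sims, key=sims.get)

predicted = [cluster_label[c] for c in clusters]
print(predicted)
```

On this toy input the two heating-related requirements end up in one cluster and the two alarm-related ones in the other, and the corpus-similarity step names the clusters without any labeled training data, which is the appeal of the approach in data-scarce settings.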
MBTModelGenerator: Automated reverse engineering of test models from clickstream data for model-based testing of web applications
IF 4.1 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-13 DOI: 10.1016/j.jss.2025.112745
Vahid Garousi , Sasidhar Matta , Alper Buğra Keleş , Yunus Balaman , Zafar Jafarov , Aytan Mövsümova , Atif Namazov

Context

Model-Based Testing (MBT) was first introduced in the 1970s and has the potential to improve the efficiency and effectiveness of testing. However, its adoption, especially for web applications, has been hindered by the effort required to manually design MBT models and keep them up to date.

Objective

Based on the above challenge in a real industrial context, this study introduces an automated approach to reduce that effort by reverse engineering MBT models from clickstream data captured during users' interaction with web applications.

Method

We have developed, and present in this paper, an open-source tool named MBTModelGenerator, which logs user interactions via a lightweight JavaScript module in the front end and transmits them to a REST API backend. These interactions are then transformed into directly executable MBT models in the input format of GraphWalker, an open-source MBT tool.

Results

The tool was evaluated on two representative open-source web applications, Spring PetClinic and a Task Manager web app, and is under evaluation in several large-scale industrial testing projects. The generated MBT models accurately reflected user navigation flows and could be executed in the GraphWalker MBT tool without any manual changes. Using the tool has significantly reduced the effort of MBT model design by more than 90%, while still allowing test engineers to inspect and refine the generated models for completeness.

Conclusion

Our approach facilitates lightweight adoption of MBT by automating model generation, which is the most effort-intensive phase of MBT. To ensure correctness and completeness, the generated models should still be reviewed by test engineers, but that effort remains substantially lower than designing MBT models from scratch. The tool is in active industrial use and is available as open source for reuse and further development.
Vahid Garousi, Sasidhar Matta, Alper Buğra Keleş, Yunus Balaman, Zafar Jafarov, Aytan Mövsümova, Atif Namazov. MBTModelGenerator: Automated reverse engineering of test models from clickstream data for model-based testing of web applications. Journal of Systems and Software, vol. 234, Article 112745. DOI: 10.1016/j.jss.2025.112745
Citations: 0
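The core transformation the MBTModelGenerator abstract describes, from a clickstream log to an executable navigation model, can be sketched as follows. This is an invented illustration, not the tool's code: the event log, element names, and helper structure are assumptions, and the field names follow GraphWalker's JSON model format (vertices, edges, source/target vertex ids, a generator expression) as commonly documented; consult the GraphWalker documentation for the authoritative schema.

```python
# Sketch: reverse-engineer a GraphWalker-style model from clickstream events.
# The session data and element names below are invented for illustration.
import json

# One user session: (page the user was on, element clicked, page reached).
clickstream = [
    ("Home", "loginButton", "Login"),
    ("Login", "submitButton", "Dashboard"),
    ("Dashboard", "logoutLink", "Home"),
]

vertices = {}  # page name -> vertex id
edges = []

def vertex_id(name):
    """Assign a stable id to each distinct page (model vertex)."""
    if name not in vertices:
        vertices[name] = f"v{len(vertices)}"
    return vertices[name]

# Each click becomes an edge between the source and target page vertices.
for source, action, target in clickstream:
    edges.append({
        "id": f"e{len(edges)}",
        "name": f"click_{action}",
        "sourceVertexId": vertex_id(source),
        "targetVertexId": vertex_id(target),
    })

model = {
    "name": "RecoveredNavigationModel",
    "generator": "random(edge_coverage(100))",
    "vertices": [{"id": vid, "name": f"v_{name}"} for name, vid in vertices.items()],
    "edges": edges,
}
print(json.dumps({"models": [model]}, indent=2))
```

Deduplicating pages into vertices is what makes the output a model rather than a trace: repeated visits to the same page collapse into one vertex, so a test generator can later walk paths the recorded users never took.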
Improving the ability of pre-trained language model by imparting large language model’s experience
IF 4.1 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-13 DOI: 10.1016/j.jss.2025.112744
Xin Yin, Chao Ni , Xinrui Li, Xiaohu Yang
Large Language Models (LLMs) and pre-trained Language Models (LMs) have achieved impressive success on many software engineering tasks (e.g., code completion and code generation). By leveraging huge existing code corpora (e.g., GitHub), these models can understand the patterns in source code and use these patterns to predict code properties. However, LLMs under few-shot learning perform poorly on non-generative tasks (e.g., fault localization and vulnerability localization), and fine-tuning LLMs is time-consuming and costly for end users and small organizations. Furthermore, the performance of fine-tuning LMs for non-generative tasks is impressive, yet it heavily depends on the amount and quality of data. As a result, the current lack of data and the high cost of collecting it in real-world scenarios further limit the applicability of LMs. In this paper, we leverage the powerful generation capabilities of LLMs to enhance pre-trained LMs. Specifically, we use LLMs to generate domain-specific data, thereby improving the performance of pre-trained LMs on the target tasks. We conduct experiments by combining different LLMs in our generation phase and introducing various LMs to learn from the LLM-generated data. Then, we compare the performance of these LMs before and after learning the data. We find that LLM-generated data significantly enhances the performance of LMs. The improvement can reach up to 58.36% for fault localization and up to 6.09% for clone detection.
Xin Yin, Chao Ni, Xinrui Li, Xiaohu Yang. Improving the ability of pre-trained language model by imparting large language model’s experience. Journal of Systems and Software, vol. 234, Article 112744. DOI: 10.1016/j.jss.2025.112744
Citations: 0
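The idea in the abstract above, enriching a scarce labeled set with LLM-generated, domain-specific examples before training a small model, can be sketched minimally. Everything here is an invented stand-in: the toy task, the texts (hand-written in place of real LLM output), and the choice of a TF-IDF plus logistic-regression classifier rather than an actual pre-trained LM.

```python
# Sketch: augment a tiny labeled set with synthetic domain-specific examples
# (stand-ins for LLM-generated data) and train a lightweight classifier.
# Task, texts, and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Scarce original labels: 1 = suspicious line, 0 = clean line (toy task).
seed_texts = ["null pointer dereference", "well tested helper"]
seed_labels = [1, 0]

# Stand-ins for LLM-generated, domain-specific training data.
generated_texts = [
    "possible null pointer here",
    "dereference of freed pointer",
    "clean utility function",
    "documented helper routine",
]
generated_labels = [1, 1, 0, 0]

# Combine real and generated data, then train on the enlarged set.
texts = seed_texts + generated_texts
labels = seed_labels + generated_labels

vec = TfidfVectorizer().fit(texts)
clf = LogisticRegression().fit(vec.transform(texts), labels)

pred = clf.predict(vec.transform(["pointer dereference bug"]))[0]
print(int(pred))
```

With only the two seed examples the classifier has almost no vocabulary to generalize from; the generated examples widen its lexical coverage, which is the same mechanism the paper exploits at scale with pre-trained LMs.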