
Automated Software Engineering: Latest Publications

A systematic literature review on software security testing using metaheuristics
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-23 | DOI: 10.1007/s10515-024-00433-0
Fatma Ahsan, Faisal Anwer

The security of an application is critical to its success, as breaches cause losses for organizations and individuals. Search-based software security testing (SBSST) is the field that uses metaheuristics to generate test cases for software testing against pre-specified security test adequacy criteria. This paper conducts a systematic literature review to compare the metaheuristics and fitness functions used in software security testing, exploring their distinctive capabilities and their impact on vulnerability detection and code coverage. The aim is to provide insights for fortifying software systems against emerging threats in a rapidly evolving technological landscape. The paper examines how search-based algorithms have been explored in the context of code coverage and software security testing, and highlights the different metaheuristics and fitness functions used for each. It follows Kitchenham's standard guidelines for conducting an SLR and obtained 122 primary studies related to SBSST after a multi-stage selection process. The papers, published between 2001 and 2022, come from several sources: journals, conference proceedings, workshops, summits, and researchers' webpages. The outcomes demonstrate that the main vulnerabilities tackled with metaheuristics are XSS, SQLI, program crashes, and XMLI. The findings suggest several directions for future research, including detecting server-side request forgery and security testing of third-party components. New metaheuristics also need to be explored to detect security vulnerabilities that remain unexplored or under-explored. Furthermore, metaheuristics can be combined with machine learning and reinforcement learning techniques for better results, and some can be designed by considering the complexity of security testing and exploiting more fitness functions related to detecting different vulnerabilities.
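To make the idea of search-based security testing concrete, the following minimal Python sketch evolves candidate inputs with a toy genetic algorithm toward an SQL-injection-oriented fitness function. It is not drawn from any of the surveyed studies: the predicate `sanitizer_bypassed`, the token list, and all parameters are invented for illustration, and a real SBSST setup would evaluate fitness against an instrumented system under test.

```python
import random
import string

# Hypothetical "vulnerability triggered" check: a stand-in for instrumented
# application code. It only illustrates what a security test adequacy
# criterion might look like; it is not a real oracle.
def sanitizer_bypassed(payload: str) -> bool:
    return "'" in payload and "--" in payload and " or " in payload.lower()

# Fitness: reward payloads that contain more of the tokens an SQLi oracle cares about.
TOKENS = ["'", " or ", "1=1", "--"]

def fitness(payload: str) -> float:
    return sum(1.0 for t in TOKENS if t in payload.lower())

ALPHABET = string.ascii_lowercase + string.digits + "'=- "

def mutate(payload: str) -> str:
    i = random.randrange(len(payload))
    return payload[:i] + random.choice(ALPHABET) + payload[i + 1:]

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def search(pop_size: int = 50, length: int = 20, generations: int = 200) -> str:
    population = ["".join(random.choices(ALPHABET, k=length)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if sanitizer_bypassed(population[0]):
            return population[0]                       # adequacy criterion met
        survivors = population[: pop_size // 2]        # truncation selection
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return population[0]                               # best effort within the budget

if __name__ == "__main__":
    print(search())
```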

Citations: 0
A novel automated framework for fine-grained sentiment analysis of application reviews using deep neural networks
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-16 | DOI: 10.1007/s10515-024-00444-x
Haochen Zou, Yongli Wang

The substantial volume of user feedback contained in application reviews significantly contributes to the development of human-centred software requirement engineering. The abundance of unstructured text data necessitates an automated analytical framework for decision-making. Language models can automatically extract fine-grained aspect-based sentiment information from application reviews. However, existing approaches are built on general-domain corpora and struggle to elucidate the internal workings of the recognition process and the factors contributing to the analysis results. To fully utilize software engineering domain-specific knowledge and accurately identify aspect-sentiment pairs in application reviews, we design a dependency-enhanced heterogeneous graph neural network architecture based on a dual-level attention mechanism. A heterogeneous information network containing knowledge resources from the software engineering field is embedded into graph convolutional networks to account for the attribute characteristics of different node types. The relationship between aspect terms and sentiment terms in application reviews is determined by adjusting the dual-level attention mechanism. Semantic dependency enhancement is introduced to comprehensively model contextual relationships and analyze sentence structure, thereby distinguishing important contextual information. To our knowledge, this marks an initial effort to bring software engineering domain knowledge resources into deep neural networks to address fine-grained sentiment analysis. Experimental results on multiple public benchmark datasets indicate the effectiveness of the proposed automated framework on aspect-based sentiment analysis tasks for application reviews.
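As a rough, hedged illustration of what a dual-level attention mechanism over a heterogeneous graph can look like, the PyTorch sketch below applies node-level attention within each node type and type-level attention across the resulting summaries. The node types, dimensions, and class names are invented; the paper's actual architecture additionally includes dependency enhancement and graph convolutional layers that are not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualLevelAttention(nn.Module):
    """Toy dual-level attention: node-level attention within each node type,
    then type-level attention across the per-type summaries. Only a schematic
    stand-in for the paper's dependency-enhanced heterogeneous GNN."""

    def __init__(self, dim: int):
        super().__init__()
        self.node_score = nn.Linear(dim, 1)   # scores nodes inside one type
        self.type_score = nn.Linear(dim, 1)   # scores the per-type summaries

    def forward(self, nodes_by_type: list[torch.Tensor]) -> torch.Tensor:
        summaries = []
        for nodes in nodes_by_type:                            # nodes: (num_nodes, dim)
            alpha = F.softmax(self.node_score(nodes), dim=0)   # node-level weights
            summaries.append((alpha * nodes).sum(dim=0))       # weighted type summary
        stacked = torch.stack(summaries)                       # (num_types, dim)
        beta = F.softmax(self.type_score(stacked), dim=0)      # type-level weights
        return (beta * stacked).sum(dim=0)                     # fused review representation

if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 16
    # Invented node types: aspect-term nodes, sentiment-term nodes, context-word nodes.
    fake_graph = [torch.randn(4, dim), torch.randn(3, dim), torch.randn(10, dim)]
    model = DualLevelAttention(dim)
    print(model(fake_graph).shape)   # torch.Size([16])
```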

Citations: 0
A systematic review of refactoring opportunities by software antipattern detection
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-15 | DOI: 10.1007/s10515-024-00443-y
Somayeh Kalhor, Mohammad Reza Keyvanpour, Afshin Salajegheh

Violations of semantic and structural software design principles, such as low coupling, high cohesion, and understandability, are called antipatterns, and they are one of the concerns of the software development process. They are caused by bad design or programming and must be detected and removed to improve an application's source code. Refactoring operators eliminate antipatterns efficiently, but the antipatterns must first be identified. Antipattern detection is therefore a critical issue in software engineering, and various approaches have been proposed for it. Review articles have been published to classify and compare these approaches, but no comprehensive study using evaluation parameters has compared antipattern detection methods across all software abstraction levels. In this article, all the methods presented so far are classified, and their advantages and disadvantages are highlighted. Finally, a complete comparison of each category against evaluation metrics is provided. Our proposed classification considers three aspects: level of abstraction, degree of dependence on developers' skills, and techniques used. The evaluation metrics reported on this subject are then analyzed, and qualitative values of these metrics for each category are presented. This information can help researchers compare, understand, and improve existing methods.
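Many of the surveyed detectors are metric-based. The sketch below is a minimal example rather than any specific approach from the review: it flags a God Class candidate from three common metrics, and the thresholds are illustrative values in the spirit of classic metric-based detection strategies that would normally be calibrated per system.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int    # weighted methods per class
    atfd: int   # access to foreign data
    tcc: float  # tight class cohesion, in [0, 1]

# Illustrative thresholds only; real detectors calibrate or learn them, which is
# exactly the developer-skill dependence the review discusses.
def looks_like_god_class(m: ClassMetrics) -> bool:
    return m.wmc >= 47 and m.atfd > 5 and m.tcc < 0.33

classes = [
    ClassMetrics("OrderManager", wmc=63, atfd=12, tcc=0.10),
    ClassMetrics("Money", wmc=6, atfd=0, tcc=0.80),
]
print([c.name for c in classes if looks_like_god_class(c)])  # ['OrderManager']
```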

Citations: 0
Extracting high-level activities from low-level program execution logs
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-13 | DOI: 10.1007/s10515-024-00441-0
Evgenii V. Stepanov, Alexey A. Mitsyuk

Modern runtime environments, standard libraries, and other frameworks provide many diagnostics facilities for software engineers. One form of such diagnostics is logging low-level events that characterize internal processes during program execution, such as garbage collection, assembly loading, and just-in-time compilation. Low-level program execution logs contain a large number of events and event classes, which makes it impossible to discover meaningful process models directly from the event log, so extracting high-level activities is a necessary step for further processing of such logs. In this paper, execution logs of .NET applications are considered, and an approach based on an unsupervised technique is extended with a domain-driven hierarchy built from knowledge of the structure of the logged events. The proposed approach allows events to be treated at different levels of abstraction, thus extending the number of patterns and activities found with the unsupervised technique. Experiments with real-life .NET program execution event logs are conducted to demonstrate the proposed approach's capability.
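A hedged Python sketch of the lifting step: low-level event classes are mapped to high-level activities through a domain-driven hierarchy, and consecutive repetitions are collapsed so the resulting trace is usable for process discovery. The event names only mimic typical .NET runtime diagnostics and the hierarchy fragment is invented, not taken from the paper.

```python
from itertools import groupby

# Invented fragment of a domain-driven hierarchy: each low-level event class is
# mapped to a higher-level activity name. A real hierarchy for .NET runtime
# events would be built from knowledge of the event provider structure.
EVENT_HIERARCHY = {
    "GCStart": "GarbageCollection",
    "GCEnd": "GarbageCollection",
    "GCSuspendEE": "GarbageCollection",
    "AssemblyLoadStart": "AssemblyLoading",
    "AssemblyLoadStop": "AssemblyLoading",
    "MethodJitStart": "JitCompilation",
    "MethodJitStop": "JitCompilation",
}

def lift(trace: list[str]) -> list[str]:
    """Lift a low-level event trace to high-level activities and collapse
    consecutive repetitions."""
    lifted = [EVENT_HIERARCHY.get(e, "Other") for e in trace]
    return [activity for activity, _ in groupby(lifted)]

trace = ["AssemblyLoadStart", "AssemblyLoadStop", "MethodJitStart", "MethodJitStop",
         "GCSuspendEE", "GCStart", "GCEnd", "MethodJitStart", "MethodJitStop"]
print(lift(trace))
# ['AssemblyLoading', 'JitCompilation', 'GarbageCollection', 'JitCompilation']
```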

Citations: 0
Optimizing software vulnerability detection using RoBERTa and machine learning
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-08 | DOI: 10.1007/s10515-024-00440-1
Cho Xuan Do, Nguyen Trong Luu, Phuong Thi Lan Nguyen

Detecting vulnerabilities in source code written in C and C++ is currently essential, as attack techniques against systems seek to find, exploit, and attack these vulnerabilities. In this article, to improve the effectiveness of the source code vulnerability detection process, we propose a new approach based on building and representing source code features using natural language processing (NLP) techniques. Our proposal consists of two main stages: (i) building a feature profile of the source code using the RoBERTa model, and (ii) classifying source code based on the feature profile using a supervised machine learning algorithm. Specifically, by utilizing the pre-trained RoBERTa model, we successfully build and represent important features of source code as complete vectors, thereby enhancing the effectiveness of prediction and vulnerability detection models. The experimental part of the article compares and evaluates our proposal against other approaches on the FFmpeg + Qemu dataset. The experimental results show that the approach in this study is superior to the other research directions on all measures. The proposal to use NLP techniques based on the RoBERTa model therefore has scientific significance as a new research direction as well as practical significance, given that the experimental results are highly effective.
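The two-stage idea translates directly into a short pipeline. The sketch below embeds code snippets with a pre-trained RoBERTa model from the Hugging Face transformers library and feeds the vectors to a scikit-learn classifier; the checkpoint name, first-token pooling, classifier choice, and the two toy snippets are assumptions rather than the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.ensemble import RandomForestClassifier

# Stage (i): build a feature profile of each snippet with a pre-trained RoBERTa.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")   # assumed checkpoint
encoder = AutoModel.from_pretrained("roberta-base")

def embed(source: str) -> torch.Tensor:
    inputs = tokenizer(source, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)   # first-token embedding

snippets = [
    "char buf[8]; strcpy(buf, user_input);",   # toy vulnerable example
    "std::string s(user_input);",              # toy benign example
]
labels = [1, 0]

# Stage (ii): supervised classification on the feature vectors.
features = torch.stack([embed(s) for s in snippets]).numpy()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
print(clf.predict(features))
```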

Citations: 0
Tsoa: a two-stage optimization approach for GCC compilation options to minimize execution time
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-24 | DOI: 10.1007/s10515-024-00437-w
Youcong Ni, Xin Du, Yuan Yuan, Ruliang Xiao, Gaolin Chen

The open-source compiler GCC offers numerous options to improve execution time. Two categories of approaches, machine-learning-based and design space exploration, have emerged for selecting the optimal set of options. However, they continue to face challenges in quickly obtaining high-quality solutions due to the large and discrete optimization space, the time-consuming utility evaluation of selected options, and the complex interactions among options. To address these challenges, we propose TSOA, a Two-Stage Optimization Approach for GCC compilation options to minimize execution time. In the first stage, we present OPPM, an Option Preselection algorithm based on Pattern Mining. OPPM generates diverse samples to cover a wide range of option interactions. It then mines frequent options from both objective-improved and non-improved samples. The mining results are further validated using CRC codes to precisely preselect options and reduce the optimization space. In the second stage, we present OSEA, an Option Selection Evolutionary optimization Algorithm. OSEA is grounded in solution preselection and an option interaction graph. The solution preselection employs a random forest to build a classifier that efficiently identifies promising solutions for the next-generation population, thereby reducing the time spent on utility evaluation. Simultaneously, the option interaction graph is built to capture option interplays and their influence on objectives from evaluated solutions. High-quality solutions are then generated based on the option interaction graph. We evaluate the performance of TSOA by comparing it with representative machine-learning-based and design space exploration approaches across a diverse set of 20 problem instances from two benchmark platforms. Additionally, we validate the effectiveness of OPPM and conduct related ablation experiments. The experimental results show that TSOA significantly outperforms state-of-the-art approaches in both optimization time and solution quality. Moreover, OPPM outperforms other option preselection algorithms, and the effectiveness of random-forest-assisted solution preselection, along with new solution generation based on the option interaction graph, is verified.
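At the core of any such approach sits a utility-evaluation loop that compiles and times the program under a candidate option set. The hedged sketch below shows only that loop with a plain random search on top; the source file name and option pool are invented, and TSOA's pattern-mining preselection and evolutionary algorithm would replace the random sampling.

```python
import random
import subprocess
import time

# A few real GCC flags, chosen only as an illustrative option pool.
OPTION_POOL = ["-ftree-vectorize", "-funroll-loops", "-fomit-frame-pointer",
               "-fno-inline", "-ffast-math"]

def measure(options: list[str], source: str = "bench.c") -> float:
    """Compile `source` with the candidate options and time one run of the binary."""
    subprocess.run(["gcc", "-O2", *options, source, "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

def random_search(budget: int = 20) -> tuple[list[str], float]:
    best_opts, best_time = [], measure([])     # baseline: -O2 with no extra options
    for _ in range(budget):
        candidate = random.sample(OPTION_POOL, k=random.randint(1, len(OPTION_POOL)))
        elapsed = measure(candidate)
        if elapsed < best_time:
            best_opts, best_time = candidate, elapsed
    return best_opts, best_time

if __name__ == "__main__":
    print(random_search())
```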

Citations: 0
ProRLearn: boosting prompt tuning-based vulnerability detection by reinforcement learning
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-20 | DOI: 10.1007/s10515-024-00438-9
Zilong Ren, Xiaolin Ju, Xiang Chen, Hao Shen

Software vulnerability detection is a critical step in ensuring system security and data protection. Recent research has demonstrated the effectiveness of deep learning in automated vulnerability detection. However, it is difficult for deep learning models to understand the semantics and domain-specific knowledge of source code. In this study, we introduce a new vulnerability detection framework, ProRLearn, which leverages two main techniques: prompt tuning and reinforcement learning. Since existing fine-tuning of pre-trained language models (PLMs) struggles to fully leverage domain knowledge, we introduce a new automatic prompt-tuning technique. Specifically, prompt tuning mimics the pre-training process of PLMs by rephrasing the task input and adding prompts, using the PLM's output as the prediction output. The reinforcement learning reward mechanism guides the behavior of vulnerability detection through a reward and punishment model, encouraging the model to learn effective strategies that maximize long-term rewards in specific environments and thereby enhancing performance. Experiments on three datasets (FFMPeg+Qemu, Reveal, and Big-Vul) indicate that ProRLearn achieves a performance improvement of 3.27–70.96% over state-of-the-art baselines in terms of F1 score. The combination of prompt tuning and reinforcement learning offers a potential opportunity to improve vulnerability detection performance, which means it can respond more effectively to constantly changing network environments and new threats. This interdisciplinary approach contributes to a better understanding of the interplay between natural language processing and reinforcement learning, opening up new opportunities and challenges for future research and applications.
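A hedged sketch of the prompt-tuning half: a code snippet is wrapped in a cloze-style template and a masked language model scores verbalizer words at the mask position, with a toy reward comparing the argmax word to the ground truth. The template, the Yes/No verbalizer, and the reward shaping are assumptions; ProRLearn additionally tunes the prompt and optimizes a policy against the reward, which is not shown.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")   # assumed PLM checkpoint
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

def label_scores(code: str) -> dict[str, float]:
    # Rephrase the task input as a cloze question with a mask token (assumed template).
    prompt = f"Code: {code} Question: is this code vulnerable? Answer: {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    scores = {}
    for word in (" Yes", " No"):                     # verbalizer words (assumed)
        token_id = tokenizer.encode(word, add_special_tokens=False)[0]
        scores[word.strip()] = logits[0, mask_pos, token_id].item()
    return scores

def reward(code: str, true_label: str) -> float:
    """Toy reward: +1 if the argmax verbalizer word matches the ground truth, else -1."""
    scores = label_scores(code)
    predicted = max(scores, key=scores.get)
    return 1.0 if predicted == ("Yes" if true_label == "vulnerable" else "No") else -1.0

print(reward("strcpy(buf, user_input);", "vulnerable"))
```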

Citations: 0
OneLog: towards end-to-end software log anomaly detection
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-16 | DOI: 10.1007/s10515-024-00428-x
Shayan Hashemi, Mika Mäntylä

With the growth of online services, IoT devices, and DevOps-oriented software development, software log anomaly detection is becoming increasingly important. Prior works mainly follow a traditional four-stage architecture (Preprocessor, Parser, Vectorizer, and Classifier). This paper proposes OneLog, which utilizes a single deep neural network instead of multiple separate components. OneLog harnesses a convolutional neural network (CNN) at the character level to take digits, numbers, and punctuation, which were removed in prior works, into account alongside the main natural language text. We evaluate our approach on six message- and sequence-based datasets: HDFS, Hadoop, BGL, Thunderbird, Spirit, and Liberty. We experiment with OneLog in single-, multi-, and cross-project setups. OneLog offers state-of-the-art performance on our datasets. OneLog can utilize multi-project datasets simultaneously during training, which suggests our model can generalize between datasets. Multi-project training also improves OneLog's performance, making it ideal when limited training data is available for an individual project. We also found that cross-project anomaly detection is possible with a single project pair (Liberty and Spirit). Analysis of model internals shows that OneLog has multiple modes of detecting anomalies and that the model learns manually validated parsing rules for the log messages. We conclude that character-based CNNs are a promising approach toward end-to-end learning in log anomaly detection. They offer good performance and generalization over multiple datasets. We will make our scripts publicly available upon the acceptance of this paper.
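A minimal character-level CNN in the spirit of a single end-to-end network is sketched below in PyTorch. The character encoding keeps digits and punctuation, matching the motivation above, but the vocabulary, layer sizes, and the sample HDFS-style log line are placeholders rather than OneLog's actual configuration.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, vocab_size: int = 128, emb: int = 16, max_len: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, 64, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.out = nn.Linear(64, 2)          # normal vs anomalous
        self.max_len = max_len

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, max_len) char ids
        h = self.embed(x).transpose(1, 2)                  # (batch, emb, max_len)
        h = torch.relu(self.conv(h))
        h = self.pool(h).squeeze(-1)                       # (batch, 64)
        return self.out(h)

def encode(message: str, max_len: int = 256) -> torch.Tensor:
    """Keep every character, including digits and punctuation, as raw input."""
    ids = [min(ord(c), 127) for c in message[:max_len]]
    ids += [0] * (max_len - len(ids))
    return torch.tensor(ids)

model = CharCNN()
batch = torch.stack([encode("081109 203518 143 INFO dfs.DataNode: Receiving block blk_-1608999687919862906")])
print(model(batch).shape)    # torch.Size([1, 2])
```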

Citations: 0
Automated quantum software engineering
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-12 | DOI: 10.1007/s10515-024-00436-x
Aritra Sarkar

As bigger quantum processors with hundreds of qubits become increasingly available, the potential for quantum computing to solve problems intractable for classical computers is becoming more tangible. Designing efficient quantum algorithms and software in tandem is key to achieving quantum advantage. Quantum software engineering is challenging due to the unique counterintuitive nature of quantum logic. Moreover, with larger quantum systems, traditional programming using quantum assembly language and qubit-level reasoning is becoming infeasible. Automated Quantum Software Engineering (AQSE) can help to reduce the barrier to entry, speed up development, reduce errors, and improve the efficiency of quantum software. This article elucidates the motivation to research AQSE (why), a precise description of such a framework (what), and reflections on components that are required for implementing it (how).

Citations: 0
Bug reports priority classification models. Replication study
IF 2.0 | CAS Tier 2 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-10 | DOI: 10.1007/s10515-024-00432-1
Andreea Galbin-Nasui, Andreea Vescan

Bug tracking systems receive a large number of bugs on a daily basis. Maintaining the integrity of the software and producing high-quality software is challenging. The bug-sorting process is usually a manual task, which can lead to human error and be time-consuming. The purpose of this research is twofold: first, to conduct a literature review of bug report priority classification approaches, and second, to replicate existing approaches with various classifiers to extract new insights about them. We used a systematic literature review methodology to identify the most relevant existing approaches to the bug report priority classification problem. Furthermore, we conducted a replication study with three classifiers: Naive Bayes (NB), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN). Two sets of experiments were performed: the first uses our own NLTK-based implementation of NB and CNN, and the second uses the Weka implementations of NB, SVM, and CNN. The dataset consists of several Eclipse projects and one project related to database systems. The results are best for bug priority P3 with the CNN classifier, and overall the quality relationship between the three classifiers is preserved as in the original studies. The replication study confirmed the findings of the original studies, emphasizing the need to further investigate the relationship between the characteristics of the projects used for training and those used for testing.
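As a hedged, replication-flavoured baseline, the scikit-learn pipeline below classifies bug report text into priorities with TF-IDF features and Naive Bayes. The four inline reports and their labels are invented for illustration; the paper's own experiments use NLTK- and Weka-based implementations on Eclipse project data instead.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: bug report summaries with priority labels.
reports = [
    "Crash on startup when workspace path contains spaces",
    "NullPointerException in editor refactoring action",
    "Typo in preferences dialog label",
    "Minor misalignment of toolbar icons on HiDPI screens",
]
priorities = ["P1", "P1", "P5", "P5"]

# TF-IDF text features feeding a Naive Bayes classifier, fit and queried end to end.
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(reports, priorities)

print(model.predict(["Editor crashes with NullPointerException when refactoring"]))
```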

Citations: 0