
Automated Software Engineering: latest publications

Data cleaning and machine learning: a systematic literature review
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-11 · DOI: 10.1007/s10515-024-00453-w
Pierre-Olivier Côté, Amin Nikanjam, Nafisa Ahmed, Dmytro Humeniuk, Foutse Khomh

Machine Learning (ML) is integrated into a growing number of systems for various applications. Because the performance of an ML model is highly dependent on the quality of the data it has been trained on, there is a growing interest in approaches to detect and repair data errors (i.e., data cleaning). Researchers are also exploring how ML can be used for data cleaning; hence creating a dual relationship between ML and data cleaning. To the best of our knowledge, there is no study that comprehensively reviews this relationship. This paper’s objectives are twofold. First, it aims to summarize the latest approaches for data cleaning for ML and ML for data cleaning. Second, it provides future work recommendations. We conduct a systematic literature review of the papers published between 2016 and 2022 inclusively. We identify different types of data cleaning activities with and for ML: feature cleaning, label cleaning, entity matching, outlier detection, imputation, and holistic data cleaning. We summarize the content of 101 papers covering various data cleaning activities and provide 24 future work recommendations. Our review highlights many promising data cleaning techniques that can be further extended. We believe that our review of the literature will help the community develop better approaches to clean data.
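As a concrete illustration of two of the surveyed activity types, the minimal Python sketch below applies imputation and outlier detection to a toy feature matrix with scikit-learn; the data and parameter choices are invented for illustration and are not drawn from any of the reviewed papers.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import IsolationForest

# Toy feature matrix with a missing value and an obvious outlier (illustrative only).
X = np.array([
    [1.0, 200.0],
    [1.1, 210.0],
    [0.9, 205.0],
    [np.nan, 198.0],   # missing feature value -> imputation
    [50.0, 9000.0],    # corrupted record -> outlier detection
])

# Imputation: fill missing values with the column median.
X_imputed = SimpleImputer(strategy="median").fit_transform(X)

# Outlier detection: flag records inconsistent with the rest and drop them.
labels = IsolationForest(contamination=0.2, random_state=0).fit_predict(X_imputed)
X_clean = X_imputed[labels == 1]   # keep only inliers for model training

print("cleaned training data:\n", X_clean)
```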

Citations: 0
SDK4ED: a platform for building energy efficient, dependable, and maintainable embedded software
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-11 · DOI: 10.1007/s10515-024-00450-z
Miltiadis Siavvas, Dimitrios Tsoukalas, Charalambos Marantos, Lazaros Papadopoulos, Christos Lamprakos, Oliviu Matei, Christos Strydis, Muhammad Ali Siddiqi, Philippe Chrobocinski, Katarzyna Filus, Joanna Domańska, Paris Avgeriou, Apostolos Ampatzoglou, Dimitrios Soudris, Alexander Chatzigeorgiou, Erol Gelenbe, Dionysios Kehagias, Dimitrios Tzovaras

Developing embedded software applications is a challenging task, chiefly due to the limitations that are imposed by the hardware devices or platforms on which they operate, as well as due to the heterogeneous non-functional requirements that they need to exhibit. Modern embedded systems need to be energy efficient and dependable, whereas their maintenance costs should be minimized, in order to ensure the success and longevity of their application. Being able to build embedded software that satisfies the imposed hardware limitations, while maintaining high quality with respect to critical non-functional requirements is a difficult task that requires proper assistance. To this end, in the present paper, we present the SDK4ED Platform, which facilitates the development of embedded software that exhibits high quality with respect to important quality attributes, with a main focus on energy consumption, dependability, and maintainability. This is achieved through the provision of state-of-the-art and novel quality attribute-specific monitoring and optimization mechanisms, as well as through a novel fuzzy multi-criteria decision-making mechanism for facilitating the selection of code refactorings, which is based on trade-off analysis among the three main attributes of choice. Novel forecasting techniques are also proposed to further support decision making during the development of embedded software. The usefulness, practicality, and industrial relevance of the SDK4ED platform were evaluated in a real-world setting, through three use cases on actual commercial embedded software applications stemming from the airborne, automotive, and healthcare domains, as well as through an industrial study. To the best of our knowledge, this is the first quality analysis platform that focuses on multiple quality criteria, which also takes into account their trade-offs to facilitate code refactoring selection.
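The abstract does not spell out the decision mechanism, so the Python sketch below only illustrates the general shape of multi-criteria refactoring selection: a plain weighted-sum ranking over the three attributes mentioned above, standing in for SDK4ED's fuzzy trade-off analysis. The candidate refactorings, scores, and weights are hypothetical.

```python
# Hypothetical refactoring candidates scored (0..1, higher is better) on the three
# quality attributes the abstract mentions; weights and scores are illustrative,
# not taken from SDK4ED.
candidates = {
    "extract_method_in_parser": {"energy": 0.6, "dependability": 0.8, "maintainability": 0.9},
    "inline_temp_in_scheduler": {"energy": 0.9, "dependability": 0.5, "maintainability": 0.4},
    "move_method_to_driver":    {"energy": 0.7, "dependability": 0.7, "maintainability": 0.6},
}
weights = {"energy": 0.4, "dependability": 0.35, "maintainability": 0.25}

def score(attrs):
    # Weighted-sum trade-off: a crisp stand-in for a fuzzy decision rule.
    return sum(weights[a] * v for a, v in attrs.items())

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, attrs in ranked:
    print(f"{name}: {score(attrs):.2f}")
```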

Citations: 0
Automated test data generation and stubbing method for C/C++ embedded projects
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-10 · DOI: 10.1007/s10515-024-00449-6
Lam Nguyen Tung, Nguyen Vu Binh Duong, Khoi Nguyen Le, Pham Ngoc Hung

Automated test data generation for unit testing C/C++ functions using concolic testing is known to improve software quality while reducing human testing effort. However, concolic testing could face challenging problems when tackling complex practical projects. This paper proposes a concolic-based method named Automated Unit Testing and Stubbing (AUTS) for automated test data and stub generation. The key idea of the proposed method is to apply the concolic testing approach with three major improvements. Firstly, the test data generation, which includes two path search strategies, not only avoids infeasible paths but also achieves higher code coverage. Secondly, AUTS generates appropriate values for specialized data types to cover more test scenarios. Finally, the proposed method integrates automatic stub preparation and generation to reduce the cost of human effort. The method even works with incomplete source code or missing libraries. AUTS is implemented in a tool to test various C/C++ industrial and open-source projects. The experimental results show that the proposed method significantly improves the coverage of the generated test data in comparison with other existing methods.
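AUTS itself relies on concolic (symbolic) execution, two path-search strategies, and stub generation, none of which are reproduced here; the Python sketch below only conveys the basic flavor of coverage-driven test data generation that keeps inputs reaching new branches, with a toy function standing in for a C/C++ unit under test.

```python
import random

def function_under_test(a, b):
    # Stand-in for a C/C++ unit under test; each return value is a coverage target.
    if a > 10:
        if b == a * 2:
            return "branch_1"
        return "branch_2"
    return "branch_3"

def generate_tests(budget=5000, seed=0):
    # Naive random generation (not concolic): keep only inputs that add branch coverage.
    rng = random.Random(seed)
    covered, suite = set(), []
    for _ in range(budget):
        a, b = rng.randint(-100, 100), rng.randint(-250, 250)
        branch = function_under_test(a, b)
        if branch not in covered:
            covered.add(branch)
            suite.append((a, b, branch))
    return suite

for a, b, branch in generate_tests():
    print(f"inputs=({a}, {b}) covers {branch}")
```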

Citations: 0
Test case selection and prioritization approach for automated regression testing using ontology and COSMIC measurement
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-10 · DOI: 10.1007/s10515-024-00447-8
Zaineb Sakhrawi, Taher Labidi

Regression testing is an important activity that aims to provide information about the quality of the software product under test when changes occur. The two primary techniques for optimizing regression testing are test case selection and prioritization. To identify features affected by a change and determine the best test cases for selection and prioritization, techniques allowing the semantic representation and the quantification of testing concepts are required. The goal of this paper is threefold. Firstly, we propose an ontology-based test case selection model that enables automated regression testing by dynamically selecting appropriate test cases. The selection of test cases is based on a semantic mapping between change requests and their associated test suites and test cases. Secondly, the selected test cases are prioritized based on their functional size. The functional size is determined using the COmmon Software Measurement International Consortium (COSMIC) Functional Size Measurement (FSM) method. The test case prioritization attempts to reorganize test case execution in accordance with its goal. One common goal is fault detection, in which test cases with a higher functional size (i.e., with a higher chance of detecting a fault) are run first, followed by the remaining test cases. Thirdly, we build an automated testing tool using the output of the aforementioned processes to validate the robustness of our proposed research methodology. Results from a case study in the automotive industry domain show that semantically representing change requests and using standardized FSM methods to quantify their related test cases are the most valuable elements of the approach; they assist in the automation of regression testing and, therefore, of the software testing process as a whole.
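A minimal Python sketch of the selection-then-prioritization idea follows: a dictionary stands in for the ontology-based mapping from a change request to its test cases, and the selected cases are ordered by their COSMIC functional size (in CFP) so that larger, presumably more fault-prone functionality is exercised first. All names and sizes are hypothetical.

```python
# Hypothetical change request, test cases, and COSMIC sizes (in CFP); the paper's
# ontology-based mapping is reduced here to a plain dictionary.
change_request = "CR-42: update braking-distance calculation"

traceability = {  # change request -> affected test cases (stand-in for the ontology)
    "CR-42: update braking-distance calculation": ["TC_BRAKE_01", "TC_BRAKE_02", "TC_DASH_07"],
}
functional_size_cfp = {"TC_BRAKE_01": 9, "TC_BRAKE_02": 14, "TC_DASH_07": 5}

# Selection: only test cases semantically linked to the change request.
selected = traceability[change_request]

# Prioritization: larger functional size first (assumed higher fault-detection chance).
prioritized = sorted(selected, key=lambda tc: functional_size_cfp[tc], reverse=True)
print(prioritized)   # ['TC_BRAKE_02', 'TC_BRAKE_01', 'TC_DASH_07']
```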

Citations: 0
The future of API analytics
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-09 · DOI: 10.1007/s10515-024-00442-z
Di Wu, Hongyu Zhang, Yang Feng, Zhenjiang Dong, Ying Sun

Reusing APIs can greatly expedite the software development process and reduce programming effort. To learn how to use APIs, developers often rely on API learning resources (such as API references and tutorials) that contain rich and valuable API knowledge. In recent years, numerous API analytic approaches have been presented to help developers mine API knowledge from API learning resources. While these approaches have shown promising results in various tasks, there are many opportunities in this area. In this paper, we discuss several possible directions for future work on API analytics.

Citations: 0
Automated requirement contradiction detection through formal logic and LLMs
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1007/s10515-024-00452-x
Alexander Elenga Gärtner, Dietmar Göhlich

This paper introduces ALICE (Automated Logic for Identifying Contradictions in Engineering), a novel automated contradiction detection system tailored for formal requirements expressed in controlled natural language. By integrating formal logic with advanced large language models (LLMs), ALICE represents a significant leap forward in identifying and classifying contradictions within requirements documents. Our methodology, grounded on an expanded taxonomy of contradictions, employs a decision tree model addressing seven critical questions to ascertain the presence and type of contradictions. A pivotal achievement of our research is demonstrated through a comparative study, where ALICE’s performance markedly surpasses that of an LLM-only approach by detecting 60% of all contradictions. ALICE achieves a higher accuracy and recall rate, showcasing its efficacy in processing real-world, complex requirement datasets. Furthermore, the successful application of ALICE to real-world datasets validates its practical applicability and scalability. This work not only advances the automated detection of contradictions in formal requirements but also sets a precedent for the application of AI in enhancing reasoning systems within product development. We advocate for ALICE’s scalability and adaptability, presenting it as a cornerstone for future endeavors in model customization and dataset labeling, thereby contributing a substantial foundation to requirements engineering.
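ALICE's seven-question decision tree is not described in enough detail here to reproduce, so the Python sketch below only illustrates the hybrid idea of checking requirement pairs with a cheap formal rule first and deferring the remaining pairs to an LLM; the requirements, the regular-expression rule, and the llm_judge placeholder are all hypothetical.

```python
import itertools
import re

# Hypothetical controlled-natural-language requirements.
requirements = [
    "R1: The pump SHALL operate at a pressure of at most 40 bar.",
    "R2: The pump SHALL operate at a pressure of at least 55 bar.",
    "R3: The controller SHALL log every state change.",
]

BOUND = re.compile(r"(at most|at least) (\d+) bar")

def formal_check(a, b):
    """Tiny formal rule: conflicting numeric bounds on the same unit."""
    ma, mb = BOUND.search(a), BOUND.search(b)
    if ma and mb and {ma.group(1), mb.group(1)} == {"at most", "at least"}:
        low = int(mb.group(2)) if mb.group(1) == "at least" else int(ma.group(2))
        high = int(ma.group(2)) if ma.group(1) == "at most" else int(mb.group(2))
        return low > high          # lower bound above upper bound -> contradiction
    return None                    # rule does not apply; defer to the LLM

def llm_judge(a, b):
    """Placeholder for an LLM call answering the remaining decision questions."""
    return False

for a, b in itertools.combinations(requirements, 2):
    verdict = formal_check(a, b)
    if verdict is None:
        verdict = llm_judge(a, b)
    print(f"{a[:2]} vs {b[:2]}: {'contradiction' if verdict else 'ok'}")
```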

Citations: 0
Optimized design refactoring (ODR): a generic framework for automated search-based refactoring to optimize object-oriented software architectures
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1007/s10515-024-00446-9
Tarik Houichime, Younes El Amrani

Software design optimization (SDO) demands advanced abstract reasoning to define optimal design components’ structure and interactions. Modeling tools such as UML and MERISE, and to a degree, programming languages, are chiefly developed for lucid human–machine design dialogue. For effective automation of SDO, an abstract layer attuned to the machine’s computational prowess is crucial, allowing it to harness its swift calculation and inference in determining the best design. This paper contributes an innovative and universal framework for search-based software design refactoring with an emphasis on optimization. The framework accommodates 44% of Fowler’s cataloged refactorings. Owing to its adaptable and succinct structure, it integrates effortlessly with diverse optimization heuristics, eliminating the requirement for further adaptation. Distinctively, our framework offers an artifact representation that obviates the necessity for a separate solution representation; this unified dual-purpose representation not only streamlines the optimization process but also facilitates the computation of essential object-oriented metrics. This ensures a robust assessment of the optimized model through the construction of pertinent fitness functions. Moreover, the artifact representation supports parallel optimization processes and demonstrates commendable scalability with design expansion.
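To make the search-based setting concrete, the Python sketch below hill-climbs over Move Method refactorings of a tiny hypothetical design using inter-class coupling as the fitness. It is not the ODR framework; its deliberately single-metric fitness also shows why a real fitness function, such as the multi-metric ones the paper builds from object-oriented metrics, must balance coupling against cohesion and size.

```python
import itertools

# Hypothetical design model: which class owns each method, and which method calls which.
ownership = {"parse": "Parser", "lex": "Parser", "report": "Reporter", "format": "Reporter"}
calls = [("parse", "lex"), ("parse", "format"), ("report", "format"), ("lex", "format")]

def coupling(own):
    # Fitness ingredient: number of call edges that cross class boundaries.
    return sum(1 for src, dst in calls if own[src] != own[dst])

def best_move(own):
    # Search step: try every Move Method refactoring and keep the best improvement.
    best, best_fit = None, coupling(own)
    for method, cls in itertools.product(own, set(own.values())):
        if own[method] == cls:
            continue
        trial = dict(own, **{method: cls})
        if coupling(trial) < best_fit:
            best, best_fit = trial, coupling(trial)
    return best

# Hill climbing over Move Method refactorings until no move improves the fitness.
design = dict(ownership)
while (improved := best_move(design)) is not None:
    design = improved
print(design, "coupling:", coupling(design))
```

Run as-is, the climber collapses every method into a single class, which is exactly the degenerate optimum a richer, multi-metric fitness function would penalize.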

Citations: 0
Exploring the impact of data preprocessing techniques on composite classifier algorithms in cross-project defect prediction
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1007/s10515-024-00454-9
Andreea Vescan, Radu Găceanu, Camelia Şerban

Success in software projects is now an important challenge. The main focus of the engineering community is to predict software defects based on the history of classes and other code elements. However, these software defect prediction techniques are effective only as long as there is enough data to train the prediction model. To mitigate this problem, cross-project defect prediction is used. The purpose of this research investigation is twofold: first, to replicate the experiments in the original paper proposal, and second, to investigate other settings regarding defect prediction with the aim of providing new insights and results regarding the best approach. In this study, three composite algorithms, namely AvgVoting, MaxVoting and Bagging are used. These algorithms integrate multiple machine classifiers to improve cross-project defect prediction. The experiments use pre-processed methods (normalization and standardization) and also feature selection. The results of the replicated experiments confirm the original findings when using raw data for all three methods. When normalization is applied, better results than in the original paper are obtained. Even better results are obtained when feature selection is used. In the original paper, the MaxVoting approach shows the best performance in terms of the F-measure, and BaggingJ48 shows the best performance in terms of cost-effectiveness. The same results in terms of F-measure were obtained in the current experiments: best MaxVoting, followed by AvgVoting and then by BaggingJ48. Our results emphasize the previously obtained outcome; the original study is confirmed when using raw data. Moreover, we obtained better results when using preprocessing and feature selection.
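The Python sketch below mirrors this experimental setting with scikit-learn on synthetic data: hard and soft voting ensembles stand in for MaxVoting and AvgVoting, a bagged decision tree stands in for BaggingJ48 (CART rather than Weka's J48), and standardization plus feature selection form the preprocessing pipeline. The datasets and scores are illustrative only, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier, BaggingClassifier
from sklearn.metrics import f1_score

# Two synthetic "projects": train on the source project, predict defects on the target.
X_src, y_src = make_classification(n_samples=400, n_features=20, random_state=1)
X_tgt, y_tgt = make_classification(n_samples=200, n_features=20, random_state=2)

base = [("lr", LogisticRegression(max_iter=1000)), ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=0))]

composites = {
    "MaxVoting": VotingClassifier(base, voting="hard"),   # majority vote
    "AvgVoting": VotingClassifier(base, voting="soft"),   # averaged probabilities
    "Bagging":   BaggingClassifier(DecisionTreeClassifier(), n_estimators=20, random_state=0),
}

for name, clf in composites.items():
    # Preprocessing pipeline: standardization plus feature selection, as studied in the paper.
    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), clf)
    model.fit(X_src, y_src)
    print(name, "F1 on target project:", round(f1_score(y_tgt, model.predict(X_tgt)), 3))
```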

Citations: 0
Enhancing fault localization in microservices systems through span-level using graph convolutional networks
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-05 · DOI: 10.1007/s10515-024-00445-w
He Kong, Tong Li, Jingguo Ge, Lei Zhang, Liangxiong Li

In the domain of cloud computing and distributed systems, microservices architecture has become preeminent due to its scalability and flexibility. However, the distributed nature of microservices systems introduces significant challenges in maintaining operational reliability, especially in fault localization. Traditional methods for fault localization fall short because they are time-intensive and error-prone. Addressing this gap, we present SpanGraph, a novel framework employing graph convolutional networks (GCN) to achieve efficient span-level fault localization. SpanGraph constructs a directed graph from system traces to capture invocation relationships and execution times. It then utilizes GCN for edge representation learning to detect anomalies. Experimental results demonstrate that SpanGraph outperforms all baseline approaches on both the Sockshop and TrainTicket datasets. We also conduct incremental experiments on SpanGraph using unseen traces to validate its generalizability and scalability. Furthermore, we perform an ablation study, sensitivity analysis, and complexity analysis for SpanGraph to further verify its robustness, effectiveness, and flexibility. Finally, we validate SpanGraph’s effectiveness in anomaly detection and fault localization using real-world datasets.
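The NumPy sketch below is not the SpanGraph architecture; with untrained weights and a simplified propagation rule it only shows the two ingredients the abstract describes: building a directed invocation graph from spans (with execution times as node features) and deriving span-level edge representations from node embeddings. Service names and durations are hypothetical.

```python
import numpy as np

# Hypothetical spans from one trace: (caller service, callee service, duration in ms).
spans = [("gateway", "orders", 12.0), ("orders", "payment", 48.0),
         ("orders", "inventory", 9.0), ("payment", "bank-adapter", 310.0)]

services = sorted({s for span in spans for s in span[:2]})
idx = {s: i for i, s in enumerate(services)}
n = len(services)

# Directed invocation graph with self-loops; node feature = mean outgoing duration.
A = np.eye(n)
X = np.zeros((n, 1))
counts = np.zeros(n)
for src, dst, ms in spans:
    A[idx[src], idx[dst]] = 1.0
    X[idx[src], 0] += ms
    counts[idx[src]] += 1
X[counts > 0, 0] /= counts[counts > 0]

# One graph-convolution step: D^-1 A X W (a simplified, untrained propagation rule).
W = np.random.default_rng(0).normal(size=(1, 4))
D_inv = np.diag(1.0 / A.sum(axis=1))
H = np.tanh(D_inv @ A @ X @ W)          # node embeddings

# Span-level edge representations: concatenate the embeddings of both endpoints.
for src, dst, ms in spans:
    edge_repr = np.concatenate([H[idx[src]], H[idx[dst]]])
    print(f"{src} -> {dst}: edge feature dim = {edge_repr.shape[0]}")
```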

Citations: 0
A comprehensive framework for inter-app ICC security analysis of Android apps
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-04 · DOI: 10.1007/s10515-024-00439-8
Atefeh Nirumand, Bahman Zamani, Behrouz Tork Ladani

The Inter-Component Communication (ICC) model in Android enables the sharing of data and services among app components. However, it has been associated with several problems, including complexity, support for unconstrained communication, and difficulties for developers to understand. These issues have led to numerous security vulnerabilities in Android ICC. While existing research has focused on specific subsets of these vulnerabilities, it lacks comprehensive and scalable modeling of app specifications and interactions, which limits the precision of analysis. To tackle these problems, we introduce VAnDroid3, a Model-Driven Reverse Engineering (MDRE) framework. VAnDroid3 utilizes purposeful model-based representations to enhance the comprehension of apps and their interactions. We have made significant extensions to our previous work, which include the identification of six prominent ICC vulnerabilities and the consideration of both Intent and Data sharing mechanisms that facilitate ICCs. By employing MDRE techniques to create more efficient and accurate domain-specific models from apps, VAnDroid3 enables the analysis of ICC vulnerabilities on intra- and inter-app communication levels. We have implemented VAnDroid3 as an Eclipse-based tool and conducted extensive experiments to evaluate its correctness, scalability, and run-time performance. Additionally, we compared VAnDroid3 with state-of-the-art tools. The results substantiate VAnDroid3 as a promising framework for revealing Android inter-app ICC security issues.
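As a much smaller cousin of this kind of analysis, the Python sketch below scans a hypothetical decoded AndroidManifest.xml for components reachable via ICC (explicitly exported, or implicitly exported through an intent filter). It illustrates the kind of entry-point information such frameworks model, not VAnDroid3's model-driven analysis itself.

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical decoded AndroidManifest.xml snippet.
MANIFEST = """
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.app">
  <application>
    <activity android:name=".PayActivity" android:exported="true">
      <intent-filter><action android:name="com.example.PAY"/></intent-filter>
    </activity>
    <service android:name=".SyncService" android:exported="false"/>
  </application>
</manifest>
"""

ANDROID = "{http://schemas.android.com/apk/res/android}"
root = ET.fromstring(MANIFEST)

# Flag components reachable from other apps via ICC: explicitly exported, or
# implicitly exported because they declare an intent filter without exported="false".
for comp in root.iter():
    if comp.tag not in ("activity", "service", "receiver", "provider"):
        continue
    exported = comp.get(ANDROID + "exported")
    has_filter = comp.find("intent-filter") is not None
    if exported == "true" or (has_filter and exported is None):
        print(f"potential ICC entry point: {comp.tag} {comp.get(ANDROID + 'name')}")
```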

Citations: 0