
Latest publications in Automated Software Engineering

Automated test data generation and stubbing method for C/C++ embedded projects
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-10 · DOI: 10.1007/s10515-024-00449-6
Lam Nguyen Tung, Nguyen Vu Binh Duong, Khoi Nguyen Le, Pham Ngoc Hung

Automated test data generation for unit testing C/C++ functions using concolic testing is known to improve software quality while reducing manual testing effort. However, concolic testing can face challenging problems when tackling complex practical projects. This paper proposes a concolic-based method named Automated Unit Testing and Stubbing (AUTS) for automated test data and stub generation. The key idea is to apply the concolic testing approach with three major improvements. Firstly, the test data generation, which includes two path search strategies, not only avoids infeasible paths but also achieves higher code coverage. Secondly, AUTS generates appropriate values for specialized data types to cover more test scenarios. Finally, the method integrates automatic stub preparation and generation to reduce manual effort; it works even on incomplete source code or with missing libraries. AUTS is implemented in a tool and used to test various C/C++ industrial and open-source projects. The experimental results show that the proposed method significantly improves the coverage of the generated test data compared with existing methods.
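The path-exploration idea behind concolic test generation can be sketched in miniature. The function, its branch predicates, and the input domain below are purely illustrative; a real concolic engine would extract path conditions automatically and discharge them with an SMT solver rather than by brute force:

```python
from itertools import product

# Toy function under test: two branches over (x, y).
def classify(x, y):
    if x > 5:
        if y == x - 5:
            return "boundary"
        return "high"
    return "low"

# Hand-written stand-in for the branch predicates a concolic engine
# would collect from instrumented execution.
PREDICATES = [lambda x, y: x > 5, lambda x, y: y == x - 5]

def solve(path):
    """Find a concrete input satisfying a path condition, i.e. a list of
    (predicate, expected-outcome) pairs.  A real engine would query an
    SMT solver; here we brute-force a tiny integer domain."""
    for x, y in product(range(-10, 11), repeat=2):
        if all(p(x, y) == want for p, want in path):
            return (x, y)
    return None  # infeasible path: prune it instead of wasting test budget

def explore():
    """Enumerate branch-outcome vectors, keeping only feasible paths.
    (This over-approximates nested paths, so some tests are redundant.)"""
    tests = []
    for outcomes in product([True, False], repeat=len(PREDICATES)):
        inp = solve(list(zip(PREDICATES, outcomes)))
        if inp is not None:
            tests.append((inp, classify(*inp)))
    return tests

suite = explore()
```

Running `explore()` yields one concrete input per feasible branch-outcome vector, covering all three return values of `classify` while skipping any path whose condition has no solution.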

Citations: 0
Test case selection and prioritization approach for automated regression testing using ontology and COSMIC measurement
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-10 · DOI: 10.1007/s10515-024-00447-8
Zaineb Sakhrawi, Taher Labidi

Regression testing is an important activity that provides information about the quality of the software product under test when changes occur. The two primary techniques for optimizing regression testing are test case selection and prioritization. To identify the features affected by a change and determine the best test cases to select and prioritize, techniques that allow the semantic representation and quantification of testing concepts are required. The goal of this paper is threefold. Firstly, we propose an ontology-based test case selection model that enables automated regression testing by dynamically selecting appropriate test cases, based on a semantic mapping between change requests and their associated test suites and test cases. Secondly, the selected test cases are prioritized by their functional size, determined using the COmmon Software Measurement International Consortium (COSMIC) Functional Size Measurement (FSM) method. Prioritization reorders test case execution according to a given goal; a common goal is fault detection, in which test cases with a larger functional size (i.e., a higher chance of detecting a fault) are run first, followed by the remaining test cases. Thirdly, we built an automated testing tool using the output of these processes to validate the robustness of the proposed research methodology. Results from a case study in the automotive industry domain show that semantically representing change requests and quantifying their related test cases with a standardized FSM method are the most useful measures; they assist in automating regression testing and, therefore, the software testing process as a whole.
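The prioritization step described above reduces to sorting the selected test cases by descending COSMIC functional size, so the tests most likely to expose a fault run first. A minimal sketch, with made-up test names and CFP values (not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    cosmic_size_cfp: int  # COSMIC Function Points: Entries + Exits + Reads + Writes

def prioritize(selected):
    """Reorder selected test cases by descending functional size."""
    return sorted(selected, key=lambda t: t.cosmic_size_cfp, reverse=True)

# Illustrative test cases already selected for a change request.
selected = [
    TestCase("tc_login", 3),
    TestCase("tc_checkout", 9),
    TestCase("tc_search", 5),
]
ordered = prioritize(selected)
```

With the fault-detection goal, `tc_checkout` (9 CFP) would execute before `tc_search` and `tc_login`.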

Citations: 0
The future of API analytics
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-09 · DOI: 10.1007/s10515-024-00442-z
Di Wu, Hongyu Zhang, Yang Feng, Zhenjiang Dong, Ying Sun

Reusing APIs can greatly expedite the software development process and reduce programming effort. To learn how to use APIs, developers often rely on API learning resources (such as API references and tutorials) that contain rich and valuable API knowledge. In recent years, numerous API analytics approaches have been presented to help developers mine API knowledge from these resources. While these approaches have shown promising results on various tasks, many opportunities remain. In this paper, we discuss several directions for future work on API analytics.

Citations: 0
Automated requirement contradiction detection through formal logic and LLMs
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1007/s10515-024-00452-x
Alexander Elenga Gärtner, Dietmar Göhlich

This paper introduces ALICE (Automated Logic for Identifying Contradictions in Engineering), a novel automated contradiction detection system tailored to formal requirements expressed in controlled natural language. By integrating formal logic with advanced large language models (LLMs), ALICE represents a significant step forward in identifying and classifying contradictions within requirements documents. Our methodology, grounded in an expanded taxonomy of contradictions, employs a decision tree model that answers seven critical questions to ascertain the presence and type of a contradiction. In a comparative study, ALICE markedly surpasses an LLM-only approach, detecting 60% of all contradictions with higher accuracy and recall, which demonstrates its efficacy on real-world, complex requirement datasets. The successful application of ALICE to real-world datasets also validates its practical applicability and scalability. This work not only advances the automated detection of contradictions in formal requirements but also sets a precedent for applying AI to strengthen reasoning systems in product development. We advocate ALICE’s scalability and adaptability, presenting it as a cornerstone for future work on model customization and dataset labeling, thereby contributing a substantial foundation to requirements engineering.
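The decision-tree idea can be illustrated with a deliberately tiny rule-based check on controlled-natural-language requirements. The two questions below (same subject and action? opposite modal polarity?) are a toy stand-in for the seven questions the paper's decision tree asks, and the sentence pattern is an assumption of this sketch, not ALICE's grammar:

```python
import re

def parse(req):
    """Parse a 'The <subject> shall [not] <action> ...' sentence into
    (subject, negated, action); return None if it does not match."""
    m = re.match(r"The (\w+) shall( not)? (\w+)", req)
    if not m:
        return None
    return m.group(1), bool(m.group(2)), m.group(3)

def contradicts(req_a, req_b):
    a, b = parse(req_a), parse(req_b)
    if a is None or b is None:
        return False
    # Q1: do both requirements constrain the same subject and action?
    if (a[0], a[2]) != (b[0], b[2]):
        return False
    # Q2: do they assign opposite modal polarity (shall vs. shall not)?
    return a[1] != b[1]

r1 = "The controller shall log every fault"
r2 = "The controller shall not log every fault"
```

Here `contradicts(r1, r2)` holds because both requirements bind the same subject and action with opposite polarity; a full system would add question types for conditions, timing, and quantities.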

Citations: 0
Optimized design refactoring (ODR): a generic framework for automated search-based refactoring to optimize object-oriented software architectures
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1007/s10515-024-00446-9
Tarik Houichime, Younes El Amrani

Software design optimization (SDO) demands advanced abstract reasoning to define the optimal structure of, and interactions among, design components. Modeling tools such as UML and MERISE, and to a degree programming languages, are chiefly developed for lucid human–machine design dialogue. For effective automation of SDO, an abstract layer attuned to the machine’s computational prowess is crucial, allowing it to harness swift calculation and inference in determining the best design. This paper contributes an innovative and universal framework for search-based software design refactoring with an emphasis on optimization. The framework accommodates 44% of Fowler’s cataloged refactorings. Owing to its adaptable and succinct structure, it integrates effortlessly with diverse optimization heuristics, eliminating the need for further adaptation. Distinctively, our framework offers an artifact representation that obviates the need for a separate solution representation; this unified dual-purpose representation not only streamlines the optimization process but also facilitates the computation of essential object-oriented metrics, enabling robust assessment of the optimized model through pertinent fitness functions. Moreover, the artifact representation supports parallel optimization processes and demonstrates commendable scalability as designs grow.
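The general shape of search-based refactoring can be sketched as hill-climbing over "move method" steps guided by a fitness function. The design model, the single refactoring, and the coupling metric below are deliberately minimal stand-ins for the paper's artifact representation and metric suite:

```python
# method -> (owning class, classes of the methods it calls); illustrative only.
design = {
    "checkout": ("Order", ["Payment", "Payment", "Order"]),
    "refund": ("Order", ["Payment", "Payment"]),
    "charge": ("Payment", ["Payment"]),
}

def coupling(design):
    """Fitness: number of cross-class calls (lower is better)."""
    return sum(owner != callee
               for owner, calls in design.values()
               for callee in calls)

def best_move(design):
    """Try moving each method to every other class; return the move
    that lowers coupling the most, or None if none improves."""
    classes = {owner for owner, _ in design.values()}
    best, best_fit = None, coupling(design)
    for method, (owner, calls) in design.items():
        for target in classes - {owner}:
            candidate = dict(design)
            candidate[method] = (target, calls)
            if coupling(candidate) < best_fit:
                best, best_fit = candidate, coupling(candidate)
    return best

def hill_climb(design):
    while (improved := best_move(design)) is not None:
        design = improved
    return design

optimized = hill_climb(design)
```

Starting from four cross-class calls, the search moves `refund` and then `checkout` onto `Payment`, stopping when no single move improves the metric; a production framework would combine many refactorings and several fitness functions, and could evaluate candidates in parallel.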

Citations: 0
Exploring the impact of data preprocessing techniques on composite classifier algorithms in cross-project defect prediction
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-06 · DOI: 10.1007/s10515-024-00454-9
Andreea Vescan, Radu Găceanu, Camelia Şerban

Delivering successful software projects remains an important challenge. A major focus of the engineering community is predicting software defects based on the history of classes and other code elements. However, such defect prediction techniques are effective only when there is enough data to train the prediction model; cross-project defect prediction is used to mitigate this problem. The purpose of this investigation is twofold: first, to replicate the experiments of the original paper, and second, to investigate other settings for defect prediction with the aim of providing new insights into the best approach. In this study, three composite algorithms, namely AvgVoting, MaxVoting and Bagging, are used. These algorithms integrate multiple machine-learning classifiers to improve cross-project defect prediction. The experiments use preprocessing methods (normalization and standardization) as well as feature selection. The results of the replicated experiments confirm the original findings when using raw data for all three methods. When normalization is applied, better results than in the original paper are obtained, and better still when feature selection is used. In the original paper, MaxVoting shows the best performance in terms of F-measure, and BaggingJ48 the best cost-effectiveness. The current experiments yield the same F-measure ranking: MaxVoting best, followed by AvgVoting and then BaggingJ48. Our results thus confirm the original study on raw data, while preprocessing and feature selection yield further improvements.
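The MaxVoting composite can be sketched in a few lines: each base classifier predicts a defect label per module (0 = clean, 1 = defective) and the ensemble emits the majority label. The base predictions below are made up for illustration; in the study they would come from trained classifiers such as J48:

```python
from collections import Counter

def max_voting(predictions_per_classifier):
    """predictions_per_classifier: equal-length label lists, one per
    base classifier.  Returns the majority label per instance."""
    return [
        Counter(votes).most_common(1)[0][0]
        for votes in zip(*predictions_per_classifier)
    ]

base_predictions = [
    [1, 0, 1, 0],  # e.g. a decision-tree classifier
    [1, 1, 0, 0],  # e.g. a naive Bayes classifier
    [1, 0, 0, 1],  # e.g. a logistic-regression classifier
]
ensemble = max_voting(base_predictions)
```

AvgVoting would instead average the classifiers' predicted probabilities and threshold the mean; with an odd number of voters, as here, MaxVoting never ties.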

Citations: 0
Enhancing fault localization in microservices systems through span-level using graph convolutional networks
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-05 · DOI: 10.1007/s10515-024-00445-w
He Kong, Tong Li, Jingguo Ge, Lei Zhang, Liangxiong Li

In the domain of cloud computing and distributed systems, microservices architecture has become preeminent due to its scalability and flexibility. However, the distributed nature of microservices systems introduces significant challenges to maintaining operational reliability, especially fault localization. Traditional fault localization methods are insufficient because they are time-intensive and error-prone. Addressing this gap, we present SpanGraph, a novel framework employing graph convolutional networks (GCN) to achieve efficient span-level fault localization. SpanGraph constructs a directed graph from system traces to capture invocation relationships and execution times, then uses GCN-based edge representation learning to detect anomalies. Experimental results demonstrate that SpanGraph outperforms all baseline approaches on both the Sockshop and TrainTicket datasets. We also conduct incremental experiments on SpanGraph using unseen traces to validate its generalizability and scalability. Furthermore, we perform an ablation study, sensitivity analysis, and complexity analysis to further verify its robustness, effectiveness, and flexibility. Finally, we validate SpanGraph’s effectiveness in anomaly detection and fault localization on real-world datasets.
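The graph-construction step described above can be sketched as turning a list of trace spans into a directed caller-to-callee graph whose edges carry execution times. The span fields and service names are illustrative; the GCN that learns edge representations from these features is beyond this sketch:

```python
from collections import defaultdict

# Illustrative trace: (span_id, parent_span_id, service, duration_ms).
spans = [
    ("s1", None, "gateway", 120.0),
    ("s2", "s1", "orders", 80.0),
    ("s3", "s2", "payments", 55.0),
    ("s4", "s1", "catalog", 10.0),
]

def build_invocation_graph(spans):
    """Map each (caller service, callee service) edge to the execution
    times of the callee spans observed on that edge."""
    by_id = {sid: (svc, dur) for sid, _, svc, dur in spans}
    edges = defaultdict(list)
    for sid, parent, svc, dur in spans:
        if parent is not None:
            caller = by_id[parent][0]
            edges[(caller, svc)].append(dur)
    return dict(edges)

graph = build_invocation_graph(spans)
```

Each edge's duration list becomes a feature vector for the downstream model, and an anomalous edge (e.g. an unusually slow `orders` → `payments` call) points at the faulty span.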

Citations: 0
A comprehensive framework for inter-app ICC security analysis of Android apps
IF 2.0 · CAS Zone 2, Computer Science · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-06-04 · DOI: 10.1007/s10515-024-00439-8
Atefeh Nirumand, Bahman Zamani, Behrouz Tork Ladani

The Inter-Component Communication (ICC) model in Android enables the sharing of data and services among app components. However, it has been associated with several problems, including complexity, support for unconstrained communication, and difficulty for developers to understand, which have led to numerous security vulnerabilities in Android ICC. Existing research has focused on specific subsets of these vulnerabilities but lacks comprehensive and scalable modeling of app specifications and interactions, which limits the precision of analysis. To tackle these problems, we introduce VAnDroid3, a Model-Driven Reverse Engineering (MDRE) framework that uses purposeful model-based representations to enhance the comprehension of apps and their interactions. We have significantly extended our previous work, identifying six prominent ICC vulnerabilities and considering both the Intent and the data-sharing mechanisms that facilitate ICC. By employing MDRE techniques to create more efficient and accurate domain-specific models from apps, VAnDroid3 enables the analysis of ICC vulnerabilities at both the intra- and inter-app communication levels. We have implemented VAnDroid3 as an Eclipse-based tool and conducted extensive experiments to evaluate its correctness, scalability, and run-time performance. Additionally, we compared VAnDroid3 with state-of-the-art tools. The results substantiate VAnDroid3 as a promising framework for revealing Android inter-app ICC security issues.
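One kind of check such an analysis performs can be illustrated on a simplified manifest model: flagging components that are exported (reachable from other apps over ICC) yet protected by no permission. The dictionary structure and component names are a toy stand-in for a parsed AndroidManifest.xml, not VAnDroid3's actual model:

```python
# Simplified stand-in for a parsed AndroidManifest.xml.
manifest = {
    "app": "com.example.shop",
    "components": [
        {"name": ".PayActivity", "exported": True, "permission": None},
        {"name": ".SyncService", "exported": True,
         "permission": "com.example.shop.SYNC"},
        {"name": ".CacheProvider", "exported": False, "permission": None},
    ],
}

def unprotected_exported(manifest):
    """Flag components reachable from other apps without any permission."""
    return [c["name"] for c in manifest["components"]
            if c["exported"] and c["permission"] is None]

findings = unprotected_exported(manifest)
```

Here only `.PayActivity` is flagged: it is exported and unguarded, so any co-installed app could target it with a crafted Intent; cross-app analysis would then match such components against the Intents other apps send.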

A systematic literature review on software security testing using metaheuristics
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-05-23 · DOI: 10.1007/s10515-024-00433-0
Fatma Ahsan, Faisal Anwer

The security of an application is critical to its success, as breaches cause losses for organizations and individuals. Search-based software security testing (SBSST) is the field that utilizes metaheuristics to generate test cases satisfying pre-specified security test adequacy criteria. This paper conducts a systematic literature review to compare the metaheuristics and fitness functions used in software security testing, exploring their distinctive capabilities and their impact on vulnerability detection and code coverage. The aim is to provide insights for fortifying software systems against emerging threats in a rapidly evolving technological landscape. The paper examines how search-based algorithms have been explored in the context of code coverage and software security testing, and highlights the different metaheuristics and fitness functions used for each. Following Kitchenham's standard guidelines for conducting an SLR, we obtained 122 primary studies related to SBSST after a multi-stage selection process. The papers come from journals, conference proceedings, workshops, summits, and researchers' webpages published between 2001 and 2022. The outcomes demonstrate that the vulnerabilities most frequently tackled with metaheuristics are XSS, SQLI, program crashes, and XMLI. The findings suggest several directions for future research, including detecting server-side request forgery and security testing of third-party components. Moreover, new metaheuristics need to be explored to detect security vulnerabilities that remain unexplored or significantly under-explored. Furthermore, metaheuristics can be combined with machine learning and reinforcement learning techniques for better results, and new ones can be designed by considering the complexity of security testing and exploiting additional fitness functions related to detecting different vulnerabilities.
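As a concrete, deliberately tiny illustration of the search-based idea the review surveys, the sketch below hill-climbs an input string toward the guard of an injection-prone branch using a branch-distance fitness function. It is a generic SBST toy, not a technique from any surveyed paper; the target payload and the distance function are invented for illustration.

```python
import random

# Generic SBST toy: a hill climber evolves an input string toward the
# guard of an injection-prone branch. Fitness is a branch distance
# (0 means the branch is covered). Not taken from any surveyed paper.

TARGET = "'--"  # hypothetical payload fragment that reaches the branch

def branch_distance(candidate: str) -> int:
    """Distance to satisfying the guard; smaller is better, 0 covers it."""
    dist = abs(len(candidate) - len(TARGET))
    dist += sum(a != b for a, b in zip(candidate, TARGET))
    return dist

def hill_climb(seed: str = "aaa", budget: int = 50_000,
               rng: random.Random = random.Random(0)) -> str:
    best = seed
    for _ in range(budget):
        if branch_distance(best) == 0:
            break
        # Mutate one random position; keep the mutant only if it
        # strictly improves the fitness.
        pos = rng.randrange(len(best))
        mutant = best[:pos] + chr(rng.randrange(32, 127)) + best[pos + 1:]
        if branch_distance(mutant) < branch_distance(best):
            best = mutant
    return best

print(hill_climb())
```

Real SBSST fitness functions typically combine approach level and normalized branch distance measured over an instrumented program; the string distance here merely stands in for that signal.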

A novel automated framework for fine-grained sentiment analysis of application reviews using deep neural networks
IF 2 · Tier 2 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-05-16 · DOI: 10.1007/s10515-024-00444-x
Haochen Zou, Yongli Wang

The substantial volume of user feedback contained in application reviews contributes significantly to the development of human-centred software requirement engineering. The abundance of unstructured text data necessitates an automated analytical framework for decision-making. Language models can automatically extract fine-grained aspect-based sentiment information from application reviews. However, existing approaches are constructed on general-domain corpora, and it is difficult to elucidate the internal workings of their recognition process or the factors that contribute to their analysis results. To fully utilize software engineering domain-specific knowledge and accurately identify aspect-sentiment pairs in application reviews, we design a dependency-enhanced heterogeneous graph neural network architecture based on a dual-level attention mechanism. A heterogeneous information network carrying knowledge resources from the software engineering field is embedded into graph convolutional networks to capture the attribute characteristics of different node types. The relationship between aspect terms and sentiment terms in application reviews is determined by adjusting the dual-level attention mechanism. Semantic dependency enhancement is introduced to comprehensively model contextual relationships and analyze sentence structure, thereby distinguishing important contextual information. To our knowledge, this marks an initial effort to bring software engineering domain knowledge resources into deep neural networks for fine-grained sentiment analysis. Experimental results on multiple public benchmark datasets indicate the effectiveness of the proposed automated framework in aspect-based sentiment analysis tasks for application reviews.
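The task's target structure, aspect-sentiment pairs, can be illustrated without any neural machinery. The sketch below pairs adjectival modifiers with their head nouns over a hand-written, simplified dependency parse of a review sentence; the paper's dependency-enhanced heterogeneous GNN is far richer, and the token format here is hypothetical.

```python
# Toy illustration of the task output: extract (aspect, sentiment) pairs
# from a dependency-parsed review sentence by pairing adjectival
# modifiers with the nouns they attach to. This only shows the target
# structure, not the paper's heterogeneous GNN.

def extract_pairs(tokens):
    """tokens: list of (index, word, pos, head_index, relation)."""
    words = {i: w for i, w, _, _, _ in tokens}
    pairs = []
    for i, word, pos, head, rel in tokens:
        if rel == "amod" and pos == "ADJ":
            pairs.append((words[head], word))
    return pairs

# Hand-written, simplified parse of a review fragment.
sent = [
    (0, "login", "NOUN", 1, "compound"),
    (1, "screen", "NOUN", 3, "nsubj"),
    (2, "fine", "ADJ", 1, "amod"),
    (3, "battery", "NOUN", 4, "compound"),
    (4, "drain", "NOUN", 5, "obj"),
    (5, "terrible", "ADJ", 4, "amod"),
]
print(extract_pairs(sent))  # [('screen', 'fine'), ('drain', 'terrible')]
```

In the paper's setting, resolving such pairs also involves attention over a knowledge-enriched graph rather than a single syntactic relation, which is what handles long-distance and implicit aspect-sentiment links.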
