
Latest Publications in Swarm and Evolutionary Computation

Generating logic circuit classifiers from dendritic neural model via multi-objective optimization
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-26 · DOI: 10.1016/j.swevo.2024.101740
Inspired by biological neurons, a novel dendritic neural model (DNM) was proposed in our previous research to pursue a classification technique with simpler architecture, fewer parameters, and higher computation speed. The trained DNM can be transitioned to logic circuit classifiers (LCCs) by discarding unnecessary synapses and dendrites. Unlike conventional artificial neural networks with floating-point calculations, the LCC operates entirely in binary, so it can be easily implemented in hardware and offers significant advantages in dealing with high-velocity data thanks to its computational speed. However, oversimplifying the model architecture leads to performance degradation of the LCC, and how to balance architecture and performance is not well understood in practical applications. Therefore, the primary motivation of this study is twofold. First, a theoretical analysis is presented showing that the transition from DNM to LCCs can be regarded as a specific regularization problem. Second, a multiobjective optimization framework that can simultaneously optimize the classification performance and the model complexity of the LCC is proposed to solve the problem. Comprehensive experiments have been conducted to validate the effectiveness and superiority of the proposed framework.
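As an illustration of the trade-off this abstract describes, LCC extraction can be posed as a bi-objective minimization over a binary pruning mask: classification error versus the number of retained synapses. A minimal sketch in Python/NumPy, where `dnm_predict` is a hypothetical stand-in for the trained DNM's forward pass under a mask, not the authors' code:

```python
import numpy as np

def evaluate_lcc(mask, X, y, dnm_predict):
    """Bi-objective evaluation of a pruned classifier (both objectives minimized).

    mask        : binary NumPy vector, 1 = keep a synapse, 0 = discard it
    dnm_predict : hypothetical callable (X, mask) -> predicted labels
    """
    error = float(np.mean(dnm_predict(X, mask) != y))  # objective 1: misclassification rate
    complexity = int(mask.sum())                       # objective 2: retained synapses
    return error, complexity

def dominates(a, b):
    """Pareto dominance for minimization: does objective vector a dominate b?"""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

Any multiobjective evolutionary algorithm can then rank masks by `dominates` over these two objectives.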
Citations: 0
Large language models as surrogate models in evolutionary algorithms: A preliminary study
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-26 · DOI: 10.1016/j.swevo.2024.101741
Large Language Models (LLMs) have demonstrated remarkable advancements across diverse domains, manifesting considerable capabilities in evolutionary computation, notably in generating new solutions and automating algorithm design. Surrogate-assisted selection plays a pivotal role in evolutionary algorithms (EAs), especially in addressing expensive optimization problems by reducing the number of real function evaluations. However, whether LLMs can serve as surrogate models remains unknown. In this study, we propose a novel surrogate model based purely on LLM inference capabilities, eliminating the need for training. Specifically, we formulate model-assisted selection as a classification or regression problem, utilizing LLMs to directly evaluate the quality of new solutions based on historical data. This involves predicting whether a solution is good or bad, or approximating its value. This approach is then integrated into EAs, termed LLM-assisted EA (LAEA). Detailed experiments compared the visualization results of 2D data from 9 mainstream LLMs, as well as their performance on 5-10 dimensional problems. The experimental results demonstrate that LLMs have significant potential as surrogate models in evolutionary computation, achieving performance comparable to traditional surrogate models while using inference alone. This work offers new insights into the application of LLMs in evolutionary computation. Code is available at: https://github.com/hhyqhh/LAEA.git.
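The classification variant of this LLM-based surrogate can be illustrated as follows. Here `query_llm` is a hypothetical callable standing in for whatever LLM client is used, and the prompt format is only a plausible sketch; the authors' actual implementation is in the linked repository:

```python
def build_prompt(history, candidate):
    """history: list of (solution_vector, fitness) pairs; candidate: a new solution vector."""
    lines = [f"x={list(x)}, f={fit:.4f}" for x, fit in history]
    return (
        "Evaluated solutions so far (lower f is better):\n"
        + "\n".join(lines)
        + f"\nNew solution: x={list(candidate)}\n"
        "Is the new solution likely good or bad? Answer with one word: good or bad."
    )

def llm_preselect(candidates, history, query_llm):
    """Keep only candidates the LLM labels 'good'.

    query_llm is a hypothetical callable mapping a prompt string to a text completion.
    """
    kept = [c for c in candidates
            if query_llm(build_prompt(history, c)).strip().lower().startswith("good")]
    return kept or list(candidates)  # if everything is rejected, fall back to all candidates
```

The surviving candidates are the only ones passed to the expensive real objective, which is where the evaluation savings come from.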
Citations: 0
Intelligent cross-entropy optimizer: A novel machine learning-based meta-heuristic for global optimization
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-25 · DOI: 10.1016/j.swevo.2024.101739
Machine Learning (ML) features are extensively applied in various domains, notably in the context of Metaheuristic (MH) optimization methods. While MHs are known for their exploitation and exploration capabilities in navigating large and complex search spaces, they are not without inherent weaknesses. These weaknesses include slow convergence rates, a struggle to strike an optimal balance between exploration and exploitation, and the challenge of effective knowledge extraction from complex data. To address these shortcomings, an AI-based global optimization technique is introduced, known as the Intelligent Cross-Entropy Optimizer (ICEO). This method draws inspiration from the concept of Cross Entropy (CE), a strategy that uses Kullback–Leibler or cross-entropy divergence as a measure of closeness between two sampling distributions, and it leverages Machine Learning (ML) to extract knowledge from the search data and to learn and guide the search dynamically within complex search spaces. ICEO employs the Self-Organizing Map (SOM) to train on and map the intricate, high-dimensional relationships within the search space onto a reduced lattice structure. This combination empowers ICEO to effectively address the weaknesses of traditional MH algorithms. To validate the effectiveness of ICEO, a rigorous evaluation involving well-established benchmark functions, including the CEC 2017 test suite, as well as real-world engineering problems has been conducted. A comprehensive statistical analysis, employing the Wilcoxon test, ranks ICEO against other prominent optimization approaches. The results demonstrate the superiority of ICEO in achieving the optimal balance between computational efficiency, precision, and reliability. In particular, it excels in enhancing convergence rates and exploration-exploitation balance.
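ICEO's starting point, the cross-entropy method, iteratively refits a sampling distribution to the elite samples; the SOM-based learning layer described above is specific to the paper and is not reproduced in this minimal NumPy sketch:

```python
import numpy as np

def cross_entropy_minimize(f, dim, iters=50, pop=100, elite_frac=0.1, seed=0):
    """Plain cross-entropy method for continuous minimization (no SOM component).

    Each iteration samples from a diagonal Gaussian, keeps the elite fraction,
    and refits the Gaussian to the elites (the cross-entropy projection step).
    """
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.apply_along_axis(f, 1, samples)
        elites = samples[np.argsort(scores)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-12
    return mu

# Example: minimize the 5-dimensional sphere function.
best = cross_entropy_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
```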
Citations: 0
A survey of genetic algorithms for clustering: Taxonomy and empirical analysis
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-24 · DOI: 10.1016/j.swevo.2024.101720
Clustering, an unsupervised learning technique, aims to group patterns into clusters where similar patterns are grouped together, while dissimilar ones are placed in different clusters. This task can present itself as a complex optimization problem due to the extensive search space generated by all potential data partitions. Genetic Algorithms (GAs) have emerged as efficient tools for addressing this task. Consequently, significant advancements and numerous proposals have been developed in this field.
This work offers a comprehensive and critical review of state-of-the-art mono-objective Genetic Algorithms (GAs) for partitional clustering. From a more theoretical standpoint, it examines 22 well-known proposals in detail, covering their encoding strategies, objective functions, genetic operators, local search methods, and parent selection strategies. Based on this information, a specific taxonomy is proposed. In addition, from a more practical standpoint, a detailed experimental study is conducted to discern the advantages and disadvantages of the approaches. Specifically, 22 different cluster validation indices are considered to compare the performance of clustering techniques. This evaluation is performed across 94 datasets encompassing diverse configurations, including the number of classes, separation between classes, and pattern dimensionality. Results reveal interesting findings, such as the key role of local search in optimizing results and reducing the search space. Additionally, representations based on centroids and labels demonstrate greater efficiency, while crossover and mutation operators prove to be less relevant. Ultimately, while the results are satisfactory, real-world clustering problems introduce additional complexity, especially for algorithms aiming to determine the number of clusters, resulting in diminished performance and the need for new approaches to be explored. Code, datasets, and instructions to run the algorithms in the LEAL library are available in an associated repository, in order to facilitate future experiments in this environment.
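Of the encodings the survey covers, the centroid-based representation is among the most widely used. A minimal sketch of how such a chromosome is decoded and scored, with within-cluster sum of squares standing in for the many validity indices the survey compares:

```python
import numpy as np

def decode_and_score(chromosome, X, k):
    """Decode a centroid-based chromosome and score the induced partition.

    chromosome : flat NumPy vector of k * d centroid coordinates
    X          : (n, d) data matrix
    Returns (cluster labels, within-cluster sum of squares); lower is better.
    """
    centroids = np.asarray(chromosome, float).reshape(k, X.shape[1])
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)  # (n, k) squared distances
    labels = d2.argmin(axis=1)                                       # nearest-centroid assignment
    wcss = float(d2[np.arange(len(X)), labels].sum())
    return labels, wcss
```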
Citations: 0
Deep reinforcement learning assisted novelty search in Voronoi regions for constrained multi-objective optimization
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-24 · DOI: 10.1016/j.swevo.2024.101732
Solving constrained multi-objective optimization problems (CMOPs) requires optimizing multiple conflicting objectives while satisfying various constraints. Existing constrained multi-objective evolutionary algorithms (CMOEAs) cross infeasible regions by ignoring constraints. However, these methods might neglect promising search directions, leading to insufficient exploration of the search space. To address this issue, this paper proposes a deep reinforcement learning assisted constrained multi-objective quality-diversity algorithm. The proposed algorithm designs a diversity maintenance mechanism to promote even coverage of the final solution set on the constrained Pareto front. Specifically, first, a novelty-oriented archive is created using a centroid Voronoi tessellation, which divides the search space into a desired number of Voronoi regions. Each region acts as a repository of non-dominated solutions with different phenotypic characteristics to provide diversity information and supplementary evolutionary trails. Secondly, to improve resource utilization, a deep Q-network is adopted to learn a policy to select suitable Voronoi regions for offspring generation based on their novelty scores. The exploration of these regions aims to find a set of diverse, high-performing solutions to accelerate convergence and escape local optima. Compared with eight state-of-the-art CMOEAs, experimental studies on four benchmark suites and nine real-world applications demonstrate that the proposed algorithm exhibits superior or at least competitive performance, especially on problems with discrete and narrow feasible regions.
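The novelty-oriented archive rests on a centroid Voronoi tessellation. A minimal sketch of building approximate CVT centroids (Lloyd's iterations on uniform samples) and assigning a solution's descriptor to its region; the DQN-based region selection and the dominance bookkeeping are omitted:

```python
import numpy as np

def cvt_centroids(n_regions, dim, n_samples=10000, iters=20, seed=0):
    """Approximate CVT centroids with Lloyd's algorithm on uniform samples in [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    points = rng.random((n_samples, dim))
    centroids = points[rng.choice(n_samples, n_regions, replace=False)].copy()
    for _ in range(iters):
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(n_regions):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)  # move centroid to its cell's mean
    return centroids

def assign_region(descriptor, centroids):
    """Index of the Voronoi region a normalized descriptor falls into."""
    return int(((centroids - np.asarray(descriptor, float)) ** 2).sum(axis=1).argmin())
```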
Citations: 0
Industrial activated sludge model identification using hyperparameter-tuned metaheuristics
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-20 · DOI: 10.1016/j.swevo.2024.101733

This study focuses on the parameter estimation of an industrial activated sludge model using hyperparameter-tuned metaheuristic techniques. The data used in this study were collected on-site from a textile industry wastewater treatment plant. A Modified Activated Sludge Model (M-ASM) was the 'first-principle model' selected and implemented with suitable assumptions. Advanced metaheuristic techniques such as Adaptive Tunicate Swarm Optimization (ATSO), the Whale Optimization Algorithm (WOA), Rao-3 Optimization (Rao-3), and Driving Training Based Optimization (DTBO) were implemented. The hyperparameter tuning was performed with Bayesian Optimization (BO). Optimized metaheuristic algorithms were implemented for model-parameter identification. The Bayesian-optimized Rao-3 (BO-Rao-3) algorithm provided the best validation results, with a Mean Absolute Percentage Error (MAPE) value of 7.0141 and a Normalized Root Mean Square Error (NRMSE) value of 0.2629. It also had the least execution time. BO-Rao-3 is 0.93% to 4.7% better than the other implemented hyperparameter-tuned metaheuristic techniques.
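The two reported validation metrics follow standard definitions. A minimal sketch, assuming NRMSE is normalized by the observed range (the paper may normalize differently):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))

def nrmse(y_true, y_pred):
    """Root Mean Square Error normalized by the observed range."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (y_true.max() - y_true.min()))
```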

Citations: 0
Constrained large-scale multiobjective optimization based on a competitive and cooperative swarm optimizer
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-20 · DOI: 10.1016/j.swevo.2024.101735

Many engineering application problems can be modeled as constrained multiobjective optimization problems (CMOPs), which have attracted much attention. In solving CMOPs, existing algorithms encounter difficulties in balancing conflicting objectives and constraints. Worse still, the performance of the algorithms deteriorates drastically when the number of decision variables scales up. To address these issues, this study proposes a competitive and cooperative swarm optimizer for large-scale CMOPs. To balance conflicting objectives and constraints, a bidirectional search mechanism based on competitive and cooperative swarms is designed. It involves two swarms, approximating the true Pareto front from two directions. To enhance the search efficiency in large-scale space, we propose a fast-converging competitive swarm optimizer. Unlike existing competitive swarm optimizers, the proposed optimizer updates the velocity and position of all particles at each iteration. Additionally, to reduce the search range of the decision space, a fuzzy decision variables operator is used. Comparison experiments have been performed on test instances with 100–1000 decision variables. Experiments demonstrate the superior performance of the proposed algorithm over five peer algorithms.
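The pairwise competition at the heart of a competitive swarm optimizer can be sketched as below. This is the canonical update in which only the loser of each pair moves; the paper's optimizer updates all particles per iteration and adds constraint handling, neither of which is reproduced here:

```python
import numpy as np

def cso_step(X, V, f, phi=0.1, rng=None):
    """One iteration of a basic competitive swarm optimizer (minimization).

    X, V : (n, d) positions and velocities, with n even
    f    : objective function; phi controls attraction to the swarm mean
    """
    rng = rng if rng is not None else np.random.default_rng()
    n, d = X.shape
    mean_pos = X.mean(axis=0)
    fx = np.apply_along_axis(f, 1, X)
    for a, b in rng.permutation(n).reshape(-1, 2):      # random pairwise competitions
        w, l = (a, b) if fx[a] <= fx[b] else (b, a)     # winner and loser of the pair
        r1, r2, r3 = rng.random((3, d))
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (mean_pos - X[l])
        X[l] = X[l] + V[l]                              # only the loser moves here
    return X, V
```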

Citations: 0
Solving multi-objective robust optimization problems via Stakelberg-based game model
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-18 · DOI: 10.1016/j.swevo.2024.101734

Real-world multi-objective engineering problems frequently involve uncertainties stemming from environmental factors, production inaccuracies, and other sources. A critical aspect of addressing these problems, termed Multi-Objective Robust Optimization (MORO) problems, is the development of solutions that are both optimal and resilient to uncertainties. This paper proposes addressing these uncertainties through the application of Stackelberg game models, a novel approach involving the interaction of two players. The Leader searches for optimal and robust solutions and the Follower generates uncertainties based on the Leader’s chosen solutions. The Follower seeks to tackle the most challenging uncertainties associated with the Leader’s candidate solutions. Additionally, this paper introduces a novel metric to assess the robustness of a given set of solutions concerning specified uncertainties.

Based on the proposed approach, a co-evolutionary algorithm is developed. A numerical study is then conducted to evaluate the algorithm by comparing its performance with that of four benchmark algorithms on nine benchmark MORO problems. The numerical study also aims to assess its sensitivity to run-parameter variations. The experimental results demonstrate the proposed approach's effectiveness in identifying a non-dominated robust set of solutions.
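The Leader-Follower interaction amounts to a worst-case evaluation: for each candidate the Follower searches the uncertainty set for the most damaging disturbance, and the Leader prefers candidates whose worst case is best. A minimal single-objective sketch; the paper's multi-objective, co-evolutionary machinery is not reproduced:

```python
import numpy as np

def worst_case_value(x, f, delta_bound, n_samples=200, rng=None):
    """Follower step: approximate the worst f(x + delta) over delta in [-b, b]^d by sampling."""
    rng = rng if rng is not None else np.random.default_rng()
    deltas = rng.uniform(-delta_bound, delta_bound, size=(n_samples, len(x)))
    return max(f(x + d) for d in deltas)

def robust_best(candidates, f, delta_bound):
    """Leader step: pick the candidate whose worst-case objective value is smallest."""
    return min(candidates,
               key=lambda x: worst_case_value(np.asarray(x, float), f, delta_bound))
```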

Citations: 0
SaDENAS: A self-adaptive differential evolution algorithm for neural architecture search
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-18 · DOI: 10.1016/j.swevo.2024.101736

Evolutionary neural architecture search (ENAS) and differentiable architecture search (DARTS) are both prominent algorithms in neural architecture search, enabling the automated design of deep neural networks. To leverage the strengths of both methods, there exists a framework called continuous ENAS, which alternates between using gradient descent to optimize the supernet and employing evolutionary algorithms to optimize the architectural encodings. However, in continuous ENAS, there exists a premature convergence issue accompanied by the small model trap, a common pitfall in NAS. To address this issue, this paper proposes a self-adaptive differential evolution algorithm for neural architecture search (SaDENAS), which can reduce the interference caused by small models to other individuals during the optimization process, thereby avoiding premature convergence. Specifically, SaDENAS treats architectures within the search space as architectural encodings, leveraging vector differences between encodings as the basis for evolutionary operators. To achieve a trade-off between exploration and exploitation, we integrate both local and global search strategies with a mutation scaling factor to adaptively balance these two strategies. Empirical findings demonstrate that our proposed algorithm achieves better performance with superior convergence compared to other algorithms.
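The vector-difference operator referred to above is the standard DE mutation. A minimal sketch of DE/rand/1/bin over a continuous architecture encoding, with a jDE-style rule shown as one illustrative way to self-adapt the scaling factor (not necessarily the paper's rule):

```python
import numpy as np

def de_mutate_crossover(pop, i, F, CR, rng):
    """DE/rand/1/bin trial vector for individual i over a continuous encoding."""
    n, d = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # vector-difference mutation
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True                # guarantee at least one mutated gene
    return np.where(cross, mutant, pop[i])

def adapt_F(F, rng, F_lower=0.1, F_upper=0.9, tau=0.1):
    """jDE-style self-adaptation: with probability tau, resample F (illustrative rule)."""
    return F_lower + rng.random() * F_upper if rng.random() < tau else F
```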

Citations: 0
A dimensionality reduction assisted evolutionary algorithm for high-dimensional expensive multi/many-objective optimization
IF 8.2 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-16 · DOI: 10.1016/j.swevo.2024.101729

Surrogate-assisted multi/many-objective evolutionary algorithms (SA-MOEAs) have shown significant progress in tackling expensive optimization problems. However, existing research primarily focuses on low-dimensional optimization problems. The main reason lies in the fact that some surrogate techniques used in SA-MOEAs, such as the Kriging model, are not applicable to high-dimensional decision spaces. This paper introduces a surrogate-assisted multi-objective evolutionary algorithm with dimensionality reduction to address high-dimensional expensive optimization problems. The proposed algorithm includes two key insights. Firstly, we propose a dimensionality reduction framework containing three different feature extraction algorithms and a feature drift strategy to map the high-dimensional decision space into a low-dimensional decision space; this strategy helps to improve the robustness of surrogates. Secondly, we propose a sub-region search strategy to define a series of promising sub-regions in the high-dimensional decision space; this strategy helps to improve the exploration ability of the proposed SA-MOEA. Experimental results demonstrate the effectiveness of our proposed algorithm in comparison to several state-of-the-art algorithms.
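One simple instantiation of the dimensionality-reduction idea is a linear mapping such as PCA fitted on already-evaluated solutions: the search proceeds in the reduced space and candidates are mapped back for evaluation. A minimal scikit-learn sketch; the paper combines three feature-extraction algorithms and a feature-drift strategy, which this does not reproduce:

```python
import numpy as np
from sklearn.decomposition import PCA

def build_mapping(evaluated_X, n_components):
    """Fit a linear reduction of the decision space from already-evaluated solutions."""
    return PCA(n_components=n_components).fit(np.asarray(evaluated_X, float))

def to_low(pca, X):
    """High-dimensional decision vectors -> reduced search space."""
    return pca.transform(X)

def to_high(pca, Z):
    """Reduced candidates -> full-dimensional solutions for (expensive) evaluation."""
    return pca.inverse_transform(Z)
```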

Citations: 0