
Latest Publications in Evolutionary Computation

Exploring Automated Algorithm Design Synergizing Large Language Models and Evolutionary Algorithms: Survey and Insights.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-10-31 DOI: 10.1162/EVCO.a.370
He Yu, Jing Liu

Designing algorithms for optimization problems, whether heuristic or meta-heuristic, often relies on manual effort and domain expertise, limiting scalability and adaptability. The integration of Large Language Models (LLMs) and Evolutionary Algorithms (EAs) presents a promising way to overcome these limitations and make optimization more automated: LLMs function as dynamic agents capable of generating, refining, and interpreting optimization strategies, while EAs efficiently explore complex search spaces through evolutionary operators. Since this synergy enables a more efficient and creative search process, we first review important developments in this direction and then summarize an LLM-EA paradigm for automated optimization algorithm design. We conduct an in-depth analysis of innovative methods for four key EA modules, namely individual representation, selection, variation operators, and fitness evaluation, addressing challenges in optimization algorithm design, particularly from the perspective of LLM prompts, and analyze how the prompt flow evolves with the evolutionary process, adjusting based on evolutionary feedback (e.g., population diversity, convergence rate). Furthermore, we analyze how LLMs, through flexible prompt-driven roles, introduce semantic intelligence into fundamental EA characteristics, including diversity, convergence, adaptability, and scalability. Our systematic review and thorough analysis of the paradigm can help researchers better understand current research and boost the development of synergizing LLMs with EAs for automated optimization algorithm design.
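A minimal sketch of the LLM-EA loop the abstract summarizes, with the LLM's role as a variation operator simulated by a random perturbation (`llm_vary`, the toy fitness, and the feedback signal are all illustrative stand-ins, not the paper's method):

```python
import random

random.seed(0)

def fitness(candidate):
    # Toy objective: maximize the sum of parameters (a placeholder for
    # evaluating a generated heuristic on benchmark instances).
    return sum(candidate)

def llm_vary(parent, feedback):
    # Hypothetical prompt-driven variation: a real system would build a prompt
    # from the parent and evolutionary feedback (diversity, convergence rate)
    # and ask an LLM for a refined candidate. Here we simulate with noise,
    # taking smaller steps when the population has lost diversity.
    step = 0.5 if feedback["diversity"] < 0.1 else 1.0
    return [x + random.uniform(-step, step) for x in parent]

def evolve(pop_size=8, dims=4, generations=20):
    population = [[random.uniform(-1, 1) for _ in range(dims)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in population]
        spread = max(scores) - min(scores)
        feedback = {"diversity": spread / (abs(max(scores)) + 1e-9)}
        # Selection: keep the better half, refill with "LLM"-generated variants.
        ranked = [c for _, c in sorted(zip(scores, population), reverse=True)]
        parents = ranked[: pop_size // 2]
        children = [llm_vary(random.choice(parents), feedback)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

The point of the sketch is the division of labor: the EA supplies population, selection, and feedback, while the (mocked) LLM supplies the variation step driven by that feedback.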

Citations: 0
GESR: A Geometric Evolution Model for Symbolic Regression.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-10-31 DOI: 10.1162/EVCO.a.367
Zhitong Ma, Jinghui Zhong

Symbolic regression is a challenging task in machine learning that aims to automatically discover highly interpretable mathematical equations from limited data. Considerable effort has been devoted to addressing this issue, yielding promising results. However, current methods still face bottlenecks, especially when dealing with datasets characterized by intricate mathematical expressions. In this work, we propose a novel Geometric Evolution Symbolic Regression (GESR) algorithm. Leveraging geometric semantics, the process of symbolic regression in GESR is transformed into an approximation to a unimodal target in n-dimensional semantic space. Three key modules are then presented to enhance the approximation: (1) a new semantic gradient concept, motivated by the observation of inaccurate approximation results in semantic backpropagation, to assist exploration of the semantic space and improve the accuracy of semantic approximation; (2) a new geometric semantic search operator, tailored for efficiently approximating the target formula directly in the sparse semantic space, to obtain more accurate and interpretable solutions under strict program-size constraints; (3) the Levenberg-Marquardt algorithm with L1 regularization, used to adjust expression structures and optimize global subtree weights in support of the proposed geometric semantic search operator. Equipped with these modules, GESR achieves state-of-the-art accuracy on SRSD benchmark datasets. The implementation is available at https://github.com/MZT-srcount/GESR.
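A small illustration of the "semantics as a point in n-dimensional space" view underlying GESR: a program's semantics is its output vector on the training inputs, and search tries to approach the target semantics. The weight fitting below uses closed-form least squares on two toy subtrees as a simple stand-in for the paper's Levenberg-Marquardt (+L1) subtree-weight optimization:

```python
def semantics(program, inputs):
    # A program's semantics: its vector of outputs on the training inputs.
    return [program(x) for x in inputs]

inputs = [0.0, 1.0, 2.0, 3.0]
target = [x * x + 1.0 for x in inputs]      # hidden ground truth: x^2 + 1

# Two candidate subtrees whose weighted sum we fit: t1(x) = x*x, t2(x) = 1.
s1 = semantics(lambda x: x * x, inputs)
s2 = semantics(lambda x: 1.0, inputs)

# Normal equations for w1*s1 + w2*s2 ≈ target (2x2 system solved directly).
a11 = sum(v * v for v in s1)
a12 = sum(u * v for u, v in zip(s1, s2))
a22 = sum(v * v for v in s2)
b1 = sum(u * t for u, t in zip(s1, target))
b2 = sum(u * t for u, t in zip(s2, target))
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (a11 * b2 - a12 * b1) / det

approx = [w1 * u + w2 * v for u, v in zip(s1, s2)]
error = sum((a - t) ** 2 for a, t in zip(approx, target))
print(w1, w2, error)
```

Here the fit recovers w1 = w2 = 1 exactly, i.e. the weighted combination reaches the target point in semantic space with zero error.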

Citations: 0
Fast Pareto Optimization Using Sliding Window Selection for Problems with Deterministic and Stochastic Constraints.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-10-31 DOI: 10.1162/EVCO.a.368
Frank Neumann, Carsten Witt

Submodular optimization problems play a key role in artificial intelligence as they capture many important problems in machine learning, data science, and social networks. Pareto optimization using evolutionary multi-objective algorithms such as GSEMO (also called POMC) has been widely applied to solve constrained submodular optimization problems. A crucial factor determining the runtime these evolutionary algorithms need to obtain good approximations is their population size, which usually grows with the number of trade-offs the algorithms encounter. In this paper, we introduce a sliding window speed-up technique for recently introduced algorithms. We first examine the setting of deterministic constraints, for which bi-objective formulations have been proposed in the literature. We prove that our technique eliminates the population size as a crucial factor negatively impacting the runtime bounds of the classical GSEMO algorithm and achieves the same theoretical performance guarantees as previous approaches within less computation time. Our experimental investigations for the classical maximum coverage problem confirm that the sliding window technique clearly leads to better results across a wide range of instances and constraint settings. Having shown that the sliding approach leads to significant improvements for bi-objective formulations, we examine how to speed up a recently introduced 3-objective formulation for stochastic constraints. We show through theoretical and experimental investigations that the sliding window approach also leads to significant improvements for such 3-objective formulations, as it allows for a more tailored parent selection that matches the optimization progress of the algorithm.
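A hedged sketch of the sliding window idea: instead of drawing a parent uniformly from the whole Pareto archive, as plain GSEMO does, only archive members whose constraint value lies inside a window that tracks the expected progress of the run are eligible. The window schedule below (linear drift, fixed width) is illustrative, not the paper's exact construction:

```python
import random

random.seed(1)

def sliding_window_parent(archive, t, t_max, k_max):
    # archive: list of (cost, quality) tuples, with cost in 0..k_max.
    # The window center moves linearly from 0 to k_max over the run, so early
    # iterations select cheap solutions and late iterations select ones near
    # the constraint bound.
    center = k_max * t / t_max
    width = 1.0
    eligible = [s for s in archive if abs(s[0] - center) <= width]
    return random.choice(eligible) if eligible else random.choice(archive)

# Toy archive of trade-offs: higher cost buys higher quality.
archive = [(cost, 2 * cost) for cost in range(6)]
early = sliding_window_parent(archive, t=0, t_max=100, k_max=5)
late = sliding_window_parent(archive, t=100, t_max=100, k_max=5)
print(early, late)
```

Because selection no longer spreads over the entire archive, the effective population size at any point in time stays bounded by the window, which is the mechanism behind the runtime savings the paper proves.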

Citations: 0
R2 v2: The Pareto-compliant R2 Indicator for Better Benchmarking in Bi-objective Optimization.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-10-09 DOI: 10.1162/EVCO.a.366
Lennart Schäpermeier, Pascal Kerschke

In multi-objective optimization, set-based quality indicators are a cornerstone of benchmarking and performance assessment. They capture the quality of a set of trade-off solutions by reducing it to a scalar. One of the most commonly used set-based metrics is the R2 indicator, which describes the expected utility of a solution set to a decision-maker under a distribution of utility functions. Typically, this indicator is applied by discretizing that distribution, yielding a weakly Pareto-compliant indicator; consequently, adding a nondominated or dominating solution to a solution set may, but does not have to, improve the indicator's value. In this paper, we reinvestigate the R2 indicator under the premise of a continuous, uniform distribution of (Tchebycheff) utility functions. We analyze its properties in detail, demonstrating that this continuous variant is indeed Pareto-compliant, that is, any beneficial solution improves the metric's value. Additionally, we provide efficient computational procedures that (a) compute the metric for bi-objective problems in O(N log N), and (b) perform incremental updates to the indicator whenever solutions are added to (or removed from) the current solution set, without recomputing the indicator for the entire set. As a result, this work contributes to the state-of-the-art Pareto-compliant unary performance metrics, such as the hypervolume indicator, offering an efficient and promising alternative.
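For reference, the standard discretized R2 indicator that the paper generalizes: the average, over a finite set W of weight vectors, of the best Tchebycheff utility any solution attains (minimization convention; the paper's v2 takes the continuous uniform limit over the weights instead of a finite W):

```python
def r2_indicator(solutions, weights, ideal=(0.0, 0.0)):
    # Average over weight vectors of the best (lowest) weighted Tchebycheff
    # distance to the ideal point achieved by any solution in the set.
    total = 0.0
    for w in weights:
        best = min(
            max(w[0] * abs(a[0] - ideal[0]), w[1] * abs(a[1] - ideal[1]))
            for a in solutions
        )
        total += best
    return total / len(weights)

# Uniformly spread weights on the 2-simplex.
K = 5
weights = [(i / (K - 1), 1 - i / (K - 1)) for i in range(K)]
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]

val = r2_indicator(front, weights)
val_dom = r2_indicator(front + [(0.0, 0.0)], weights)  # add a dominating point
print(val, val_dom)
```

Adding the dominating point (0, 0) drops the indicator to its minimum, while in general a finite W only guarantees non-deterioration for beneficial solutions; that gap between "may improve" and "must improve" is exactly the weak versus strict Pareto compliance the abstract discusses.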

Citations: 0
All-Quadratic Mixed-Integer Problems: A Study on Evolution Strategies and Mathematical Programming.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-10 DOI: 10.1162/evco.a.29
Guy Zepko, Ofer M Shir

Mixed-integer (MI) quadratic models subject to quadratic constraints, known as All-Quadratic MI Programs, constitute a challenging class of NP-complete optimization problems. The particular scenario of unbounded integers defines a subclass that is even undecidable. This complexity suggests a possible soft spot for Mathematical Programming (MP) techniques, which otherwise constitute a good choice for treating MI problems. We consider the task of minimizing MI convex quadratic objective and constraint functions with unbounded decision variables. Given the theoretical weakness of white-box MP solvers in handling such models, we turn to black-box meta-heuristics of the Evolution Strategies (ESs) family and question their capacity to solve this challenge. Through an empirical assessment of all-quadratic test-cases, across varying Hessian forms and condition numbers, we compare the performance of the CPLEX solver to modern MI ESs, which handle constraints by penalty. Our systematic investigation begins where the CPLEX solver encounters difficulties (timeouts as the search-space dimensionality increases, D < 30), and we report in detail on the D = 64 case. Overall, the empirical observations confirm that black-box and white-box solvers can be competitive over this MI problem class, exhibiting 67% similar performance in terms of the attained objective function values from a fixed-budget perspective. Despite consistently terminating in timeouts, CPLEX demonstrated performance superior or comparable to the MI ESs in 98% of the cases. This trend is flipped when unboundedness is amplified by a significant translation of the optima, leading to a totally inferior performance of CPLEX across 81% of the cases. We also conclude that conditioning and separability are not intuitive factors in determining the hardness degree of this MI problem class.
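A toy version of the black-box setting studied here: minimize a convex quadratic in one continuous and one integer variable, with a quadratic constraint handled by a penalty term, using a bare-bones (1+1) Evolution Strategy. The test problem, penalty weight, and fixed step size are illustrative only, not the paper's benchmarks or its MI ES variants:

```python
import random

random.seed(2)

def objective(x, z):
    return (x - 1.5) ** 2 + (z - 3) ** 2           # convex quadratic

def constraint(x, z):
    return x ** 2 + z ** 2 - 25.0                  # feasible iff <= 0

def penalized(x, z, rho=100.0):
    # Constraint handling by penalty, as in the MI ESs the paper compares.
    return objective(x, z) + rho * max(0.0, constraint(x, z)) ** 2

def one_plus_one_es(iters=3000, sigma=1.0):
    x, z = 0.0, 0                                  # parent: (continuous, integer)
    fx = penalized(x, z)
    for _ in range(iters):
        cx = x + random.gauss(0, sigma)            # continuous mutation
        cz = z + random.choice((-1, 0, 1))         # integer mutation
        fc = penalized(cx, cz)
        if fc <= fx:                               # elitist (1+1) acceptance
            x, z, fx = cx, cz, fc
    return x, z

x, z = one_plus_one_es()
print(x, z)
```

The unconstrained optimum (1.5, 3) is feasible here, so the ES should settle near it; in the paper's hard cases the interplay of penalties, unbounded integers, and conditioning makes this far less benign.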

Citations: 0
P-NP Instance Decomposition Based on the Fourier Transform for Solving the Linear Ordering Problem.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-02 DOI: 10.1162/evco_a_00368
Xabier Benavides, Leticia Hernando, Josu Ceberio, Jose A Lozano

The Fourier transform over finite groups has proved to be a useful tool for analyzing combinatorial optimization problems. However, few heuristic and metaheuristic algorithms proposed in the literature utilize the information provided by this technique to guide the search process. In this work, we attempt to address this research gap by considering the case study of the Linear Ordering Problem (LOP). Based on the Fourier transform, we propose an instance decomposition strategy that divides any LOP instance into the sum of two LOP instances associated with a P and an NP-Hard optimization problem. By linearly aggregating the instances obtained from the decomposition, it is possible to create artificial instances with modified proportions of the P and NP-Hard components. Experiments show that increasing the weight of the P component leads to a less rugged fitness landscape suitable for local search-based optimization. We take advantage of this phenomenon by presenting a new metaheuristic algorithm called P-Descent Search (PDS). The proposed method first optimizes a surrogate instance with a high proportion of the P component and then gradually increases the weight of the NP-Hard component until the original instance is reached. The multi-start version of PDS shows promising and predictable performance that appears to be correlated with specific characteristics of the problem, which could open the door to automatic tuning of its hyperparameters.
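The aggregation mechanics can be sketched as follows: given the P-part and NP-Hard-part matrices that the paper's Fourier-based decomposition would produce, artificial instances are convex combinations of the two. The matrices below are arbitrary stand-ins; only the LOP evaluation and the linear aggregation are shown:

```python
def lop_value(matrix, perm):
    # Linear Ordering Problem objective: sum of entries above the diagonal
    # after reordering rows/columns by the permutation.
    n = len(perm)
    return sum(matrix[perm[i]][perm[j]]
               for i in range(n) for j in range(i + 1, n))

def aggregate(p_part, np_part, alpha):
    # Artificial instance B(alpha) = (1 - alpha) * P + alpha * NP-Hard:
    # alpha = 0 recovers the easy component, alpha = 1 the hard one.
    n = len(p_part)
    return [[(1 - alpha) * p_part[i][j] + alpha * np_part[i][j]
             for j in range(n)] for i in range(n)]

p_part = [[0, 5, 4], [1, 0, 3], [2, 2, 0]]     # illustrative P component
np_part = [[0, 1, 0], [3, 0, 1], [4, 2, 0]]    # illustrative NP-Hard component

inst = aggregate(p_part, np_part, 0.5)
v = lop_value(inst, (0, 1, 2))
print(v)
```

P-Descent Search then amounts to optimizing along a schedule of increasing alpha, warm-starting each instance with the best permutation found on the previous, smoother one.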

Citations: 0
Genetic Programming for Automatically Evolving Multiple Features to Classification.
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-02 DOI: 10.1162/evco_a_00359
Peng Wang, Bing Xue, Jing Liang, Mengjie Zhang

Performing classification on high-dimensional data poses a significant challenge due to the huge search space. Moreover, complex feature interactions introduce an additional obstacle. These problems can be addressed by using feature selection to select relevant features or feature construction to construct a small set of high-level features. However, performing feature selection or feature construction alone may still leave the feature set suboptimal. To remedy this, this study investigates the use of genetic programming for simultaneous feature selection and feature construction in addressing different classification tasks. The proposed approach is tested on 16 datasets and compared with seven methods, including both feature selection and feature construction techniques. The results show that the obtained feature sets, with the constructed and/or selected features, can significantly increase classification accuracy and reduce the dimensionality of the datasets. Further analysis reveals the complementarity of the obtained features, which leads to the promising classification performance of the proposed method.
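A minimal sketch of the representation that makes selection and construction simultaneous in GP: each evolved tree is one new high-level feature, its leaves pick original features (selection) and its internal nodes combine them (construction). Tree format and operator set here are illustrative, not the paper's exact function set:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def eval_tree(tree, sample):
    # Leaf: an int is an index into the original feature vector (selection).
    if isinstance(tree, int):
        return sample[tree]
    # Internal node: (operator, left subtree, right subtree) (construction).
    op, left, right = tree
    return OPS[op](eval_tree(left, sample), eval_tree(right, sample))

# Constructed feature: (f0 * f2) - f1, using only features 0, 1, 2 of 4.
tree = ("-", ("*", 0, 2), 1)
sample = [2.0, 3.0, 4.0, 99.0]   # feature 3 is implicitly deselected
value = eval_tree(tree, sample)
print(value)
```

Evolving several such trees at once yields the small set of complementary high-level features the abstract reports; features never used as leaves are effectively filtered out for free.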

引用次数: 0
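A GP-constructed feature, as studied above, is essentially an expression tree over the original input features. The following is a rough illustration of that idea only (our own sketch, not the authors' implementation; the operator set, tree depth, and growth probability are arbitrary choices):

```python
import operator
import random

# Sketch: a GP-constructed feature is an expression tree whose leaves
# index the original features and whose internal nodes are arithmetic
# operators.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_tree(n_features, depth=2, rng=random):
    """Grow a random expression tree over feature indices 0..n_features-1."""
    if depth == 0 or rng.random() < 0.3:
        return ("x", rng.randrange(n_features))  # terminal: an original feature
    op = rng.choice(sorted(OPS))
    return (op,
            random_tree(n_features, depth - 1, rng),
            random_tree(n_features, depth - 1, rng))

def evaluate(tree, sample):
    """Evaluate a constructed feature on one sample (a list of floats)."""
    if tree[0] == "x":
        return sample[tree[1]]
    op, left, right = tree
    return OPS[op](evaluate(left, sample), evaluate(right, sample))

# The constructed feature x0 + (x1 * x2) evaluated on one sample:
feature = ("+", ("x", 0), ("*", ("x", 1), ("x", 2)))
print(evaluate(feature, [1.0, 2.0, 3.0, 4.0]))  # 1 + 2*3 = 7.0
```

Note how selection falls out of construction for free: whichever feature indices appear as leaves of the evolved trees are, implicitly, the selected features.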
Optimizing Monotone Chance-Constrained Submodular Functions Using Evolutionary Multiobjective Algorithms.
IF 3.4 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-09-02 | DOI: 10.1162/evco_a_00360
Aneta Neumann, Frank Neumann

Many real-world optimization problems can be stated in terms of submodular functions. Furthermore, these real-world problems often involve uncertainties which may lead to the violation of given constraints. Many evolutionary multiobjective algorithms following the Pareto optimization approach have recently been analyzed and applied to submodular problems with different types of constraints. We present a first runtime analysis of evolutionary multiobjective algorithms based on Pareto optimization for chance-constrained submodular functions. Here the constraint involves stochastic components, and it may be violated only with a small probability α. We investigate the classical GSEMO algorithm for two different bi-objective formulations using tail bounds to determine the feasibility of solutions. We show that, when using the appropriate bi-objective formulation, GSEMO obtains the same worst-case performance guarantees for monotone submodular functions as recently analyzed greedy algorithms, both for uniform IID weights and for uniformly distributed weights with the same dispersion. As part of our investigations, we also point out situations where the use of tail bounds in the first bi-objective formulation can prevent GSEMO from obtaining good solutions in the case of uniformly distributed weights with the same dispersion, if the objective function is submodular but non-monotone because a single element impacts monotonicity. Furthermore, we investigate the behavior of the evolutionary multiobjective algorithms GSEMO, NSGA-II, and SPEA2 on different submodular chance-constrained network problems. Our experimental results show that the use of evolutionary multiobjective algorithms leads to significant performance improvements compared to state-of-the-art greedy algorithms for submodular optimization.
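The tail-bound feasibility check described in the abstract can be sketched as follows, with the one-sided Chebyshev (Cantelli) inequality standing in for the paper's exact bounds; the function name, the independence assumption on element weights, and the toy numbers are ours, not the authors':

```python
import math

# Hedged sketch: replace the chance constraint P(W(x) > B) <= alpha with
# a deterministic surrogate from the one-sided Chebyshev (Cantelli)
# inequality, assuming independent element weights with known mean and
# variance.
def cantelli_feasible(selected, means, variances, budget, alpha):
    """Feasible (w.p. >= 1 - alpha) if E[W] + sqrt((1-alpha)/alpha * Var[W]) <= B."""
    mean = sum(means[i] for i in selected)
    var = sum(variances[i] for i in selected)
    return mean + math.sqrt((1 - alpha) / alpha * var) <= budget

# Toy usage: three elements with uniform-style weights (mean 1, variance 1/12).
print(cantelli_feasible({0, 1, 2}, [1.0] * 3, [1 / 12] * 3, budget=5.0, alpha=0.1))  # -> True
```

Because the surrogate is deterministic, an algorithm like GSEMO can treat it as an ordinary constraint during the run while still guaranteeing that any solution it reports satisfies the chance constraint with probability at least 1 - α.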

Pages: 363-393
Citations: 0
Large-Scale Multiobjective Evolutionary Algorithm Guided by Low-Dimensional Surrogates of Scalarization Functions.
IF 3.4 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-09-02 | DOI: 10.1162/evco_a_00354
Haoran Gu, Handing Wang, Cheng He, Bo Yuan, Yaochu Jin

Recently, computationally intensive multiobjective optimization problems have been efficiently solved by surrogate-assisted multiobjective evolutionary algorithms. However, most of those algorithms can handle no more than 200 decision variables. As the number of decision variables increases further, unreliable surrogate models will result in a dramatic deterioration of their performance, which makes large-scale expensive multiobjective optimization challenging. To address this challenge, we develop a large-scale multiobjective evolutionary algorithm guided by low-dimensional surrogate models of scalarization functions. The proposed algorithm (termed LDS-AF) reduces the dimension of the original decision space based on principal component analysis, and then directly approximates the scalarization functions in a decomposition-based multiobjective evolutionary algorithm. With the help of a two-stage modeling strategy and convergence control strategy, LDS-AF can keep a good balance between convergence and diversity, and achieve a promising performance without being trapped in a local optimum prematurely. The experimental results on a set of test instances have demonstrated its superiority over eight state-of-the-art algorithms on multiobjective optimization problems with up to 1,000 decision variables using only 500 real function evaluations.
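The dimensionality-reduction step described in the abstract can be sketched as a projection/reconstruction round trip. Here the orthonormal basis is hard-coded purely for illustration; in LDS-AF it would come from principal component analysis over the current population of decision vectors:

```python
# Minimal sketch of searching in a PCA-reduced decision space: project
# high-dimensional decision vectors down, and map low-dimensional
# candidates back for real evaluations.
def project(x, mean, basis):
    """z = W^T (x - mean); basis is a list of orthonormal direction vectors."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    return [sum(wi * ci for wi, ci in zip(w, centered)) for w in basis]

def reconstruct(z, mean, basis):
    """x_hat = mean + W z: map a low-dimensional candidate back to decision space."""
    x = list(mean)
    for zk, w in zip(z, basis):
        x = [xi + zk * wi for xi, wi in zip(x, w)]
    return x

mean = [0.0, 0.0, 0.0, 0.0]
basis = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0]]      # 4-D decision space -> 2-D search space
z = project([3.0, 4.0, 0.0, 0.0], mean, basis)
print(z)                            # [3.0, 4.0]
print(reconstruct(z, mean, basis))  # [3.0, 4.0, 0.0, 0.0]
```

Surrogate models of the scalarization functions are then fitted over the low-dimensional z rather than the original x, which is what keeps the modeling tractable at 1,000 decision variables.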

Pages: 309-334
Citations: 0
On the Use of the Doubly Stochastic Matrix Models for the Quadratic Assignment Problem.
IF 3.4 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-09-02 | DOI: 10.1162/evco_a_00369
Valentino Santucci, Josu Ceberio

Permutation problems have captured the attention of the combinatorial optimization community for decades due to the challenge they pose. Although their solutions are naturally encoded as permutations, in each problem, the information to be used to optimize them can vary substantially. In this paper, we consider the Quadratic Assignment Problem (QAP) as a case study, and propose using Doubly Stochastic Matrices (DSMs) under the framework of Estimation of Distribution Algorithms. To that end, we design efficient learning and sampling schemes that enable an effective iterative update of the probability model. Conducted experiments on commonly adopted benchmarks for the QAP prove doubly stochastic matrices to be preferred to the other four models for permutations, both in terms of effectiveness and computational efficiency. Moreover, additional analyses performed on the structure of the QAP and the Linear Ordering Problem (LOP) show that DSMs are good to deal with assignment problems, but they have interesting capabilities to deal also with ordering problems such as the LOP. The paper concludes with a description of the potential uses of DSMs for other optimization paradigms, such as genetic algorithms or model-based gradient search.
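One plausible learning-and-sampling pair for a DSM-based probability model (our own sketch, not necessarily the paper's scheme) combines Sinkhorn normalization, which keeps the model doubly stochastic, with row-by-row sampling over the still-unassigned columns:

```python
import random

# Hedged sketch of a DSM-based EDA's model operations: Sinkhorn
# normalization projects a positive matrix back onto (approximately)
# doubly stochastic form; sampling assigns item j to position i with
# probability proportional to dsm[i][j] among the free columns.
def sinkhorn(matrix, iters=50):
    """Alternately normalize rows and columns of a positive square matrix."""
    n = len(matrix)
    m = [row[:] for row in matrix]
    for _ in range(iters):
        for row in m:
            s = sum(row)
            for j in range(n):
                row[j] /= s
        for j in range(n):
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m

def sample_permutation(dsm, rng=random):
    """Draw a permutation: position i gets item j with prob ~ dsm[i][j]."""
    free = list(range(len(dsm)))
    perm = []
    for i in range(len(dsm)):
        weights = [dsm[i][j] for j in free]
        r, acc = rng.random() * sum(weights), 0.0
        for j, w in zip(free, weights):
            acc += w
            if r <= acc:
                choice = j
                break
        else:
            choice = free[-1]  # guard against floating-point round-off
        perm.append(choice)
        free.remove(choice)
    return perm

model = sinkhorn([[1.0, 2.0, 3.0], [3.0, 1.0, 2.0], [2.0, 3.0, 1.0]])
print(sample_permutation(model))  # a random permutation of 0, 1, 2
```

The appeal of the DSM over, say, a plain position-frequency matrix is that both marginals are constrained at once: every row and every column sums to one, matching the fact that each item occupies exactly one position.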

Pages: 425-457
Citations: 0