
Latest Articles in Evolutionary Computation

Improving Performance of Algorithm Selection Pipelines on Large Instance Sets via Dynamic Reallocation of Budget.
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-18 · DOI: 10.1162/EVCO.a.378
Quentin Renau, Emma Hart

Special Issue PPSN 2024: Algorithm-selection (AS) methods are essential for obtaining the best performance from a portfolio of solvers. When considering large sets of instances that arrive either in a stream or in a single batch, there is significant potential to save the function-evaluation budget on some instances and reallocate it to others, thereby improving overall performance. We propose an AS pipeline which (1) identifies easy instances, which are solved using the single best solver, avoiding the need to run a selector; (2) curtails runs on both easy and hard instances if they become stalled in the search space and/or are predicted to remain in a stalled state, thereby saving budget; (3) reallocates the budget saved in both previous steps to downstream instances, using an intelligent strategy to predict which instances will benefit most from extra function evaluations. Experiments using the BBOB dataset in two settings (batch and streaming) show that augmenting an AS pipeline with strategies to save and reallocate budget obtains significantly better results in both settings compared to a standard pipeline.
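
The three pipeline stages map naturally onto a control loop over the instance set. The Python sketch below is illustrative only: `is_easy`, `select_solver`, `is_stalled`, and the one-evaluation solver interface are hypothetical stand-ins, not the authors' implementation.

```python
import random

def run_pipeline(instances, select_solver, single_best, is_easy,
                 is_stalled, budget, chunk=10):
    """Sketch of the pipeline: (1) route easy instances to the single best
    solver, (2) curtail stalled runs, (3) pass the saved budget downstream."""
    saved, results = 0, {}
    for inst in instances:                        # batch or stream order
        solver = single_best if is_easy(inst) else select_solver(inst)
        total = budget + saved                    # (3) reallocated evaluations
        saved, used, best, history = 0, 0, float("inf"), []
        while used < total:
            best = min([best] + [solver(inst) for _ in range(chunk)])
            used += chunk
            history.append(best)
            if is_stalled(history):               # (2) bank what is left
                saved = total - used
                break
        results[inst] = best
    return results

# Toy usage: an "instance" is a shift of a 1-D sphere function, and one solver
# call is a single random evaluation.
random.seed(1)
sphere = lambda shift: (random.uniform(-5, 5) - shift) ** 2
res = run_pipeline(
    instances=[0.0, 1.0, 3.0],
    select_solver=lambda inst: sphere,
    single_best=sphere,
    is_easy=lambda inst: abs(inst) < 0.5,         # (1) hypothetical easiness test
    is_stalled=lambda h: len(h) > 3 and h[-1] >= h[-4],  # no recent progress
    budget=200,
)
print(res)
```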

Citations: 0
Double XCSF on Target?
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-15 · DOI: 10.1162/EVCO.a.377
Connor Schönberner, Sven Tomforde

The XCS Classifier System (XCS), the most prominent Learning Classifier System (LCS), originally focused on Reinforcement Learning (RL) problems. Over time, emphasis shifted heavily to supervised learning, with some applications in unsupervised learning. Following rekindled interest in LCSs for RL domains, we intend to capitalise on the close relationship between Q-learning and XCS. Except for Experience Replay, hardly any advances built on Q-learning have been investigated in XCS variants such as XCSF. Recognising this, we introduce three extensions inspired by Q-learning derivatives: target prediction, inspired by DQN's target networks, to improve learning stability; double target prediction, inspired by Double DQN; and a Double Q-learning mechanism, the latter two serving as countermeasures against overestimation. Addressing these two issues aims to improve the performance of XCSF and to reduce the high variance between runs. We apply the extensions to the Maze Problem, Frozen Lake, and Cart Pole. Our observations indicate mixed results: the Double Q-learning mechanism leads to no improvement, whereas target and double target prediction can yield observable and, in some cases, significantly improved performance and can reduce variance. This underscores that improving the RL capabilities of XCSF is non-trivial, but indicates that adapting Deep Reinforcement Learning mechanisms for XCSF can be advantageous.
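
For reference, the third extension builds on the generic tabular Double Q-learning update, sketched below in Python. This is the textbook mechanism (two tables, one selecting the greedy action and the other evaluating it, which counters overestimation), not the authors' XCSF integration.

```python
import random
from collections import defaultdict

alpha, gamma = 0.1, 0.95
QA, QB = defaultdict(float), defaultdict(float)   # two independent estimates

def double_q_update(s, a, r, s_next, actions):
    """Generic tabular Double Q-learning update for one transition."""
    if random.random() < 0.5:
        # Select the greedy next action with QA, evaluate it with QB.
        a_star = max(actions, key=lambda x: QA[(s_next, x)])
        QA[(s, a)] += alpha * (r + gamma * QB[(s_next, a_star)] - QA[(s, a)])
    else:
        # Symmetric update of QB using QA for evaluation.
        a_star = max(actions, key=lambda x: QB[(s_next, x)])
        QB[(s, a)] += alpha * (r + gamma * QA[(s_next, a_star)] - QB[(s, a)])

# One illustrative transition in a toy two-state, two-action problem.
double_q_update(s=0, a=1, r=1.0, s_next=1, actions=[0, 1])
print(QA[(0, 1)], QB[(0, 1)])
```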

Citations: 0
Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem.
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-04 · DOI: 10.1162/EVCO.a.376
Denis Antipov, Aneta Neumann, Frank Neumann, Andrew M Sutton

Diversity optimization is the class of optimization problems in which we aim to find a diverse set of good solutions. One of the frequently used approaches to solve such problems is to use evolutionary algorithms that evolve a desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyze EDO on a three-objective function LOTZ_k, which is a modification of the two-objective benchmark function (LeadingOnes, TrailingZeros). We prove that the GSEMO computes a set of all Pareto-optimal solutions in O(kn³) expected iterations. We also analyze the runtime of the GSEMO_D algorithm (a modification of the GSEMO for diversity optimization) until it finds a population with the best possible diversity for two different diversity measures: the total imbalance and the sorted imbalances vector. For the first measure we show that the GSEMO_D optimizes it in O(kn² log n) expected iterations (which is asymptotically faster than the upper bound on the runtime until it finds a Pareto-optimal population), and for the second measure we show an upper bound of O(k²n³ log n) expected iterations. We complement our theoretical analysis with an empirical study, which shows very similar behavior for both diversity measures. The results of the experiments suggest that our bounds for the total imbalance measure are tight, while the bounds for the imbalances vector are too pessimistic.
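
For readers unfamiliar with the algorithm, here is a bare-bones GSEMO on the classical bi-objective LOTZ benchmark (the paper analyzes a three-objective modification, LOTZ_k, and the diversity-aware GSEMO_D). This generic textbook sketch is only meant to make the analyzed process concrete; it is not the analyzed implementation.

```python
import random

def lotz(x):
    """Bi-objective (LeadingOnes, TrailingZeros) fitness, to be maximized."""
    n = len(x)
    lo = next((i for i, b in enumerate(x) if b == 0), n)
    tz = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
    return (lo, tz)

def dominates(u, v):
    """u is at least as good in every objective and strictly better in one."""
    return all(a >= b for a, b in zip(u, v)) and u != v

def gsemo(n, iterations):
    pop = {tuple(random.randint(0, 1) for _ in range(n))}
    for _ in range(iterations):
        parent = random.choice(sorted(pop))          # uniform parent selection
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)  # bit-flips
        fc = lotz(child)
        # Keep the child unless an existing point weakly dominates it,
        # then discard all points the child dominates.
        if not any(dominates(lotz(p), fc) or lotz(p) == fc for p in pop):
            pop = {p for p in pop if not dominates(fc, lotz(p))}
            pop.add(child)
    return pop

random.seed(0)
front = gsemo(8, 20000)
print(sorted(lotz(p) for p in front))  # after enough iterations: (i, n - i)
```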

Citations: 0
Cross-Representation Genetic Programming: A Case Study on Tree-Based and Linear Representations
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-01 · DOI: 10.1162/evco.a.25
Zhixing Huang;Yi Mei;Fangfang Zhang;Mengjie Zhang;Wolfgang Banzhaf
Existing genetic programming (GP) methods are typically designed based on a certain representation, such as tree-based or linear representations. These representations show various pros and cons in different domains. However, due to the complicated relationships between representations and the fitness landscapes of GP, it is hard to determine intuitively which GP representation is the most suitable for solving a given problem. Evolving programs (or models) with multiple representations simultaneously allows search across different fitness landscapes, since a representation is tightly coupled to the search space that essentially defines the fitness landscape. Fully exploiting the latent synergies among different GP representations might therefore help GP find better solutions. However, the existing GP literature rarely investigates the simultaneous, effective evolution of multiple representations. To fill this gap, this paper proposes a cross-representation GP algorithm based on tree-based and linear representations, two commonly used GP representations. In addition, we develop a new cross-representation crossover operator to harness the interplay between tree-based and linear representations. Empirical results show that sharing the learned knowledge between basic tree-based and linear representations successfully improves the effectiveness of GP with solely tree-based or linear representation in solving symbolic regression and dynamic job shop scheduling problems.
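
Any cross-representation operator needs a bridge between the two genotypes. As a generic illustration (not the paper's crossover operator), a linear, postfix-encoded program can be decoded into an expression tree with a stack:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def postfix_to_tree(program):
    """Decode a linear (postfix) program into a nested-tuple expression tree."""
    stack = []
    for tok in program:
        if tok in OPS:
            right, left = stack.pop(), stack.pop()
            stack.append((tok, left, right))
        else:
            stack.append(tok)              # terminal: variable or constant
    (tree,) = stack                        # a valid program leaves one tree
    return tree

def eval_tree(tree, x):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](eval_tree(left, x), eval_tree(right, x))
    return x if tree == "x" else float(tree)

tree = postfix_to_tree(["x", "1", "+", "x", "*"])   # encodes (x + 1) * x
print(tree, eval_tree(tree, 3.0))                   # ... 12.0
```

The reverse direction, tree to postfix, is a post-order traversal, which is what makes exchanging genetic material between the two representations cheap.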
Citations: 0
On Stochastic Operators, Fitness Landscapes, and Optimization Heuristics Performances
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-01 · DOI: 10.1162/evco.a.24
Brahim Aboutaib;Sébastien Verel;Cyril Fonlupt;Bilel Derbel;Arnaud Liefooghe;Belaïd Ahiod
Stochastic operators are the backbone of many optimization algorithms. Beyond the existing theoretical analyses that study the asymptotic runtime of such algorithms, characterizing their performance using fitness landscape analysis remains largely open. The fitness landscape approach considers multiple characteristics to understand and explain an optimization algorithm's performance or the difficulty of an optimization problem, and hence can provide a richer explanation. This paper analyzes the fitness landscapes of stochastic operators, focusing on the number of local optima and their relation to optimization performance. The search spaces of two combinatorial problems are studied, the NK-landscape and the Quadratic Assignment Problem, using binary-string-based and permutation-based stochastic operators. The classical bit-flip search operator is considered for binary strings, and a generalization of the deterministic exchange operator to permutation representations is devised. We study small instances, ranging from randomly generated to real-like instances, and large instances from the NK-landscape. For large instances, we propose using an adaptive walk process to estimate the number of locally optimal solutions. Given that stochastic operators are usually used within population-based and single-solution-based evolutionary optimization algorithms, we contrast the performance of the (μ+λ)-EA and an Iterated Local Search against the landscape properties of large NK-landscape instances. Our analysis shows that characterizing the fitness landscapes induced by stochastic search operators can effectively explain the optimization performance of the algorithms we considered.
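
As a concrete illustration of the ingredients named above, the following sketch pairs the classical stochastic bit-flip operator with an adaptive walk whose distinct endpoints estimate the number of local optima. The toy landscape and restart counts are illustrative choices, not the paper's experimental setup.

```python
import random

def bit_flip(x, p):
    """Classical stochastic bit-flip: flip each bit independently with prob. p."""
    return [b ^ (random.random() < p) for b in x]

def adaptive_walk(x, fitness, neighbors):
    """First-improvement hill climb; endpoints are local optima, and the set
    of distinct endpoints over many restarts estimates how many there are."""
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if fitness(y) > fitness(x):
                x, improved = y, True
                break
    return x

# Toy landscape: OneMax with a deceptive bonus at the all-zeros string.
f = lambda x: len(x) if sum(x) == 0 else sum(x)
nbrs = lambda x: [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x))]

random.seed(0)
starts = [bit_flip([0] * 8, 0.5) for _ in range(50)]       # random restarts
optima = {tuple(adaptive_walk(s, f, nbrs)) for s in starts}
print(len(optima), "distinct local optima found")          # typically 2
```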
Citations: 0
Using Machine Learning Methods to Assess Module Performance Contribution in Modular Optimization Frameworks
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-01 · DOI: 10.1162/evco_a_00356
Ana Kostovska;Diederick Vermetten;Peter Korošec;Sašo Džeroski;Carola Doerr;Tome Eftimov
Modular algorithm frameworks not only allow for combinations never tested in manually selected algorithm portfolios, but they also provide a structured approach to assess which algorithmic ideas are crucial for the observed performance of algorithms. In this paper, we propose a methodology for analyzing the impact of the different modules on the overall performance. We consider modular frameworks for two widely used families of derivative-free, black-box optimization algorithms, the covariance matrix adaptation evolution strategy (CMA-ES) and differential evolution (DE). More specifically, we use performance data of 324 modCMA-ES and 576 modDE algorithm variants (with each variant corresponding to a specific configuration of modules) obtained on the 24 BBOB problems for six different runtime budgets in two dimensions. Our analysis of these data reveals that the impact of individual modules on overall algorithm performance varies significantly. Notably, among the examined modules, the elitism module in CMA-ES and the linear population size reduction module in DE exhibit the most significant impact on performance. Furthermore, our exploratory data analysis of problem landscape data suggests that the most relevant landscape features remain consistent regardless of the configuration of individual modules, but the influence that these features have on regression accuracy varies. In addition, we apply classifiers that exploit feature importance with respect to the trained models for performance prediction and performance data, to predict the modular configurations of CMA-ES and DE algorithm variants. The results show that the predicted configurations do not exhibit a statistically significant difference in performance compared to the true configurations, with the percentage varying depending on the setup (from 49.1% to 95.5% for modCMA and 21.7% to 77.1% for DE).
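
The core of such an analysis can be reproduced in a few lines: encode each module setting as a feature, fit a performance model, and inspect the feature importances. The snippet below uses synthetic data and invented module names purely for illustration; the paper works with modCMA-ES and modDE performance data on BBOB.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
elitism = rng.integers(0, 2, n)          # binary module switch (invented)
restart = rng.integers(0, 3, n)          # 3-valued module option (invented)
noise = rng.normal(0, 0.1, n)
perf = 1.0 * elitism + 0.2 * restart + noise   # elitism matters most by design

X = np.column_stack([elitism, restart])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, perf)
for name, imp in zip(["elitism", "restart"], model.feature_importances_):
    print(f"{name}: {imp:.2f}")          # importance reflects contribution
```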
Citations: 0
Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multiobjective Continuous Optimization Problems
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-01 · DOI: 10.1162/evco_a_00367
Moritz Vinzent Seiler;Pascal Kerschke;Heike Trautmann
In many recent works, the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize single-objective continuous optimization problems has been demonstrated. These numerical features provide the input for all kinds of machine learning tasks in the domain of continuous optimization problems, ranging, for example, from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems is—to the best of our knowledge—very limited. Yet, despite their usefulness, as demonstrated in several past works, ELA features suffer from several drawbacks. These include, in particular, (1) a strong correlation between multiple features, as well as (2) its very limited applicability to multiobjective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA. In these works, among others, point-cloud transformers were used to characterize an optimization problem’s fitness landscape. However, these approaches require a large amount of labeled training data. Within this work, we propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features. We pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multiobjective optimization problems. Our proposed framework can either be used out of the box for analyzing single- and multiobjective continuous optimization problems, or subsequently fine-tuned to various tasks focusing on algorithm behavior and problem understanding.
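
In point-cloud approaches of this kind, the raw model input is a set of (x, f(x)) samples from the problem at hand. A minimal sketch of constructing such an input, under an assumed BBOB-style domain and a simple normalization of our own choosing, might look like this:

```python
import numpy as np

def sample_point_cloud(f, dim, n_points, rng):
    """Build an (n_points, dim + 1) token matrix of search points plus fitness."""
    X = rng.uniform(-5, 5, size=(n_points, dim))   # assumed BBOB-style domain
    y = np.array([f(x) for x in X])
    y = (y - y.mean()) / (y.std() + 1e-12)         # normalize the fitness channel
    return np.hstack([X, y[:, None]])

rng = np.random.default_rng(0)
cloud = sample_point_cloud(lambda x: float(np.sum(x ** 2)), dim=2,
                           n_points=128, rng=rng)
print(cloud.shape)   # (128, 3): one token per sample for a point-cloud model
```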
在最近的许多工作中,探索性景观分析(ELA)特征在数值上表征单目标连续优化问题的潜力已经得到证明。这些数值特征为连续优化问题领域的各种机器学习任务提供了输入,例如,从高级属性预测到自动算法选择和自动算法配置。如果没有ELA特征,单目标连续优化问题的特征分析和理解就我们所知是非常有限的。然而,尽管它们很有用,正如在过去的几篇文章中所展示的那样,ELA特性仍然存在一些缺点。这些包括,特别是,(1)多个特征之间的强相关性,以及(2)它对多目标连续优化问题的非常有限的适用性。作为补救措施,最近的研究提出了基于深度学习的方法作为ELA的替代方案。在这些工作中,除其他外,点云变压器被用来表征优化问题的适应度景观。然而,这些方法需要大量的标记训练数据。在这项工作中,我们提出了一种混合方法,deep -ELA,它结合了深度学习和ELA特征的(好处)。我们在数百万个随机生成的优化问题上预训练了四个变压器,以学习连续单目标和多目标优化问题的深度表示。我们提出的框架既可以用于开箱即用的分析单目标和多目标连续优化问题,也可以随后微调到专注于算法行为和问题理解的各种任务。
{"title":"Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multiobjective Continuous Optimization Problems","authors":"Moritz Vinzent Seiler;Pascal Kerschke;Heike Trautmann","doi":"10.1162/evco_a_00367","DOIUrl":"10.1162/evco_a_00367","url":null,"abstract":"In many recent works, the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize single-objective continuous optimization problems has been demonstrated. These numerical features provide the input for all kinds of machine learning tasks in the domain of continuous optimization problems, ranging, for example, from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems is—to the best of our knowledge—very limited. Yet, despite their usefulness, as demonstrated in several past works, ELA features suffer from several drawbacks. These include, in particular, (1) a strong correlation between multiple features, as well as (2) its very limited applicability to multiobjective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA. In these works, among others, point-cloud transformers were used to characterize an optimization problem’s fitness landscape. However, these approaches require a large amount of labeled training data. Within this work, we propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features. We pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multiobjective optimization problems. Our proposed framework can either be used out of the box for analyzing single- and multiobjective continuous optimization problems, or subsequently fine-tuned to various tasks focusing on algorithm behavior and problem understanding.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"33 4","pages":"513-540"},"PeriodicalIF":3.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring Automated Algorithm Design Synergizing Large Language Models and Evolutionary Algorithms: Survey and Insights.
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-31 · DOI: 10.1162/EVCO.a.370
He Yu, Jing Liu

Designing algorithms for optimization problems, whether heuristic or meta-heuristic, often relies on manual design and domain expertise, limiting scalability and adaptability. The integration of Large Language Models (LLMs) and Evolutionary Algorithms (EAs) presents a promising way to overcome these limitations and make optimization more automated: LLMs function as dynamic agents capable of generating, refining, and interpreting optimization strategies, while EAs explore complex search spaces efficiently through evolutionary operators. Since this synergy enables a more efficient and creative search process, we first review important developments in this direction and then summarize an LLM-EA paradigm for automated optimization algorithm design. We conduct an in-depth analysis of innovative methods for four key EA modules, namely individual representation, selection, variation operators, and fitness evaluation, addressing challenges in optimization algorithm design, particularly from the perspective of LLM prompts: we analyze how the prompt flow evolves with the evolutionary process and adjusts based on evolutionary feedback (e.g., population diversity, convergence rate). Furthermore, we analyze how LLMs, through flexible prompt-driven roles, introduce semantic intelligence into fundamental EA characteristics, including diversity, convergence, adaptability, and scalability. Our systematic review and thorough analysis of the paradigm can help researchers better understand current research and boost the development of synergizing LLMs with EAs for automated optimization algorithm design.
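
The surveyed paradigm can be compressed into a loop in which the LLM acts as the variation operator and the prompt carries evolutionary feedback. The sketch below assumes a hypothetical `llm(prompt) -> candidate` callable and a mock evaluator; no real LLM API is implied.

```python
import random

def llm_ea(llm, evaluate, pop_size=8, generations=20):
    """Schematic LLM-EA loop: selection by fitness, variation via prompting."""
    population = [llm("Propose a heuristic for the task.")
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        parents = sorted(population, key=evaluate, reverse=True)[: pop_size // 2]
        # Variation: the prompt feeds the best parents back to the LLM.
        prompt = "Improve on these heuristics:\n" + "\n---\n".join(parents[:2])
        children = [llm(prompt) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)

# Mock demo: the "LLM" emits random strings and fitness is string length.
random.seed(0)
mock_llm = lambda prompt: "".join(random.choice("ab")
                                  for _ in range(random.randint(1, 20)))
print(len(llm_ea(mock_llm, evaluate=len)))
```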

Citations: 0
GESR: A Geometric Evolution Model for Symbolic Regression.
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-31 · DOI: 10.1162/EVCO.a.367
Zhitong Ma, Jinghui Zhong

Symbolic regression is a challenging task in machine learning that aims to automatically discover highly interpretable mathematical equations from limited data. Considerable effort has been devoted to addressing this issue, yielding promising results. However, current methods still face bottlenecks, especially on datasets characterized by intricate mathematical expressions. In this work, we propose a novel Geometric Evolution Symbolic Regression (GESR) algorithm. Leveraging geometric semantics, the process of symbolic regression in GESR is transformed into an approximation of a unimodal target in n-dimensional semantic space. Three key modules are then presented to enhance the approximation: (1) a new semantic gradient concept, motivated by the observation of inaccurate approximation results in semantic backpropagation, to assist exploration of the semantic space and improve the accuracy of semantic approximation; (2) a new geometric semantic search operator, tailored to approximating the target formula directly and efficiently in the sparse semantic space, to obtain more accurate and interpretable solutions under strict program-size constraints; (3) the Levenberg-Marquardt algorithm with L1 regularization, used to adjust expression structures and optimize global subtree weights in support of the proposed geometric semantic search operator. Assisted by these modules, GESR achieves state-of-the-art accuracy on SRSD benchmark datasets. The implementation is available at https://github.com/MZT-srcount/GESR.
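
In the geometric-semantic view, a program is identified with its output vector on the training inputs, and search approximates the target vector. The sketch below fits global weights of fixed subtrees by regularized least squares; this ridge-style closed form is a simplified stand-in for the paper's Levenberg-Marquardt step with L1 regularization, and all names and data are illustrative.

```python
import numpy as np

X = np.linspace(-1, 1, 50)
target = 2.0 * X**2 + 0.5 * X          # semantics of the unknown formula

# Semantics of three fixed subtrees evaluated on the training inputs.
subtrees = np.column_stack([X**2, X, np.ones_like(X)])
lam = 1e-3                              # ridge penalty (stand-in for L1)

# Closed-form regularized least squares for the global subtree weights w.
w = np.linalg.solve(subtrees.T @ subtrees + lam * np.eye(3),
                    subtrees.T @ target)
print(np.round(w, 3))                   # approximately [2.0, 0.5, 0.0]
```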

Citations: 0
Fast Pareto Optimization Using Sliding Window Selection for Problems with Deterministic and Stochastic Constraints.
IF 3.4 · CAS Zone 2 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-31 · DOI: 10.1162/EVCO.a.368
Frank Neumann, Carsten Witt

Submodular optimization problems play a key role in artificial intelligence as they capture many important problems in machine learning, data science, and social networks. Pareto optimization using evolutionary multi-objective algorithms such as GSEMO (also called POMC) has been widely applied to solve constrained submodular optimization problems. A crucial factor determining the runtime these evolutionary algorithms need to obtain good approximations is their population size, which usually grows with the number of trade-offs the algorithms encounter. In this paper, we introduce a sliding-window speed-up technique for recently introduced algorithms. We first examine the setting of deterministic constraints, for which bi-objective formulations have been proposed in the literature. We prove that our technique eliminates the population size as a crucial factor negatively impacting the runtime bounds of the classical GSEMO algorithm and achieves the same theoretical performance guarantees as previous approaches in less computation time. Our experimental investigation of the classical maximum coverage problem confirms that the sliding-window technique clearly leads to better results for a wide range of instances and constraint settings. Having shown that the sliding-window approach leads to significant improvements for bi-objective formulations, we examine how to speed up a recently introduced 3-objective formulation for stochastic constraints. We show through theoretical and experimental investigations that the sliding-window approach also leads to significant improvements for such 3-objective formulations, as it allows for a more tailored parent selection that matches the optimization progress of the algorithm.
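
The idea of parent selection that matches optimization progress can be illustrated as follows: rather than sampling parents uniformly from the whole population, sample from a window over the sorted objective range that advances with the iteration counter. This is a loose reconstruction of the sliding-window idea, not the authors' exact scheme.

```python
import random

def sliding_window_parent(population, t, t_max, key):
    """Pick a parent from a window of the sorted population whose position
    advances with the optimization progress t / t_max."""
    pop = sorted(population, key=key)
    width = max(1, len(pop) // 10)                 # window: ~10% of the front
    center = int(t / t_max * (len(pop) - 1))
    lo = max(0, min(center - width // 2, len(pop) - width))
    return random.choice(pop[lo:lo + width])

# Toy usage: solutions keyed by their constraint level 0..99; a quarter of
# the way through the run, parents come from around level 25.
random.seed(0)
pop = list(range(100))
print(sliding_window_parent(pop, t=250, t_max=1000, key=lambda s: s))
```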

Citations: 0