
Latest Publications in Evolutionary Computation

Evolving Populations of Solved Subgraphs with Crossover and Constraint Repair.
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-14 DOI: 10.1162/EVCO.a.380
Jiwon Lee, Mahya Salimi Gamasaei, Andrew M Sutton

We introduce a population-based approach to solving parameterized graph problems for which the goal is to identify a small set of vertices subject to a feasibility criterion. The idea is to evolve a population of individuals where each individual corresponds to an optimal solution to a subgraph of the original problem. The crossover operation then combines both solutions and subgraphs with the hope of generating an optimal solution for a slightly larger graph. In order to correctly combine solutions and subgraphs, we propose a new crossover operator called generalized allelic crossover, which generalizes uniform crossover by associating a probability at each locus depending on the combined alleles of the parents. We prove that for graphs with n vertices and m edges, the approach solves the k-vertex cover problem in expected time O(4^k m + m^4 log n) using a simple RLS-style mutation. This bound can be improved to O(4^k m + m^2 nk log n) by using standard mutation constrained to the vertices of the graph. We also prove that a slight modification of the algorithm can be used to find k-coverable subgraphs of arbitrary graphs that are maximal under edge inclusion. We prove that a runtime budget of Ω(2^k m^3 log^2 n) suffices to generate a maximal k-coverable subgraph with high probability. Finally, we empirically show that these subgraphs often retain a number of structural properties of the source graph. This has direct implications for benchmarking, as it allows for the generation of graphs that maintain certain correlation properties while controlling for the optimal cover size.
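The RLS-style mutation mentioned in the abstract can be illustrated with a toy single-solution loop for k-vertex cover. This is only an illustrative sketch under our own assumptions (the lexicographic acceptance rule and all names are ours), not the paper's population-based algorithm with generalized allelic crossover:

```python
import random

def uncovered_edges(cover, edges):
    # Count edges with neither endpoint selected in the cover.
    return sum(1 for u, v in edges if not cover[u] and not cover[v])

def rls_vertex_cover(n, edges, k, max_iters=10_000, seed=0):
    """RLS-style search: flip one uniformly random vertex per step and
    accept the move if the (uncovered edges, cover size) pair does not
    get lexicographically worse. Hypothetical sketch, not the paper's
    algorithm."""
    rng = random.Random(seed)
    x = [False] * n
    best = (uncovered_edges(x, edges), sum(x))
    for _ in range(max_iters):
        y = x[:]
        i = rng.randrange(n)
        y[i] = not y[i]
        cand = (uncovered_edges(y, edges), sum(y))
        if cand <= best:
            x, best = y, cand
        if best[0] == 0 and best[1] <= k:
            return x  # feasible cover of size at most k found
    return None
```

On a triangle with k = 2, the loop quickly reaches a feasible two-vertex cover; prioritizing feasibility (uncovered edges) before size mirrors the constraint-repair flavor of the approach.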

Citations: 0
EvolCAF: Automatic Cost-Aware Acquisition Function Design Using Large Language Models.
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-08 DOI: 10.1162/EVCO.a.379
Yiming Yao, Fei Liu, Ji Cheng, Qingfu Zhang

To address optimization problems that involve expensive evaluations with unknown and heterogeneous costs, cost-aware Bayesian optimization (BO) has emerged as a prominent solution in many real-world scenarios. However, as a critical step in developing BO algorithms, the design of efficient cost-aware acquisition functions (AFs) remains a significant challenge. This paper introduces EvolCAF, a novel framework that integrates large language models (LLMs) with evolutionary computation (EC) to automatically design cost-aware AFs. Leveraging crossover and mutation in the algorithmic space, EvolCAF offers a novel design approach, significantly reducing the reliance on domain expertise and the labor-intensive trial-and-error of the traditional manual design paradigm. We find that the best AF designed by EvolCAF effectively utilizes the available information, including historical data, surrogate models, and budget details. It introduces novel ideas not previously explored in the existing literature on acquisition function design, and its clear interpretability provides insights into its behavior and decision-making process. In comparison to the well-known EIpu and EI-cool methods designed by human experts, our approach showcases remarkable efficiency and generalization across various synthetic and real-world tasks.
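The EIpu baseline the abstract compares against divides expected improvement by the (predicted) evaluation cost. A minimal sketch of that rule, assuming a Gaussian surrogate posterior and a minimization setting; all function names here are our own:

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form EI for minimization under a Gaussian posterior
    N(mu, sigma^2) at the candidate point; `best` is the incumbent."""
    if sigma <= 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (best - mu) * cdf + sigma * pdf

def ei_per_unit_cost(mu, sigma, best, cost, alpha=1.0):
    # EIpu-style rule: discount EI by cost^alpha, so cheap candidates
    # with comparable improvement are preferred.
    return expected_improvement(mu, sigma, best) / max(cost, 1e-12) ** alpha
```

EvolCAF searches the space of such formulas automatically; this hand-written baseline only shows the kind of AF being designed.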

Citations: 0
Improving Performance of Algorithm Selection Pipelines on Large Instance Sets via Dynamic Reallocation of Budget.
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-18 DOI: 10.1162/EVCO.a.378
Quentin Renau, Emma Hart

Special Issue PPSN 2024: Algorithm-selection (AS) methods are essential in order to obtain the best performance from a portfolio of solvers. When considering large sets of instances that arrive either in a stream or in a single batch, there is significant potential to save the function evaluation budget on some instances and reallocate it to others, thereby improving overall performance. We propose an AS pipeline which (1) identifies easy instances, which are solved using the single best solver, avoiding the need to run a selector; (2) curtails runs on both easy and hard instances if they become stalled in the search space and/or are predicted to remain in a stalled state, thereby saving budget; (3) reallocates the budget saved from both previous steps to downstream instances, using an intelligent strategy to predict which instances will benefit most from extra function evaluations. Experiments using the BBOB dataset in two settings (batch and streaming) show that augmenting an AS pipeline with strategies to save and reallocate budget obtains significantly better results in both settings compared to a standard pipeline.
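The core carry-forward idea in step (3) can be shown with a deliberately tiny model, where each instance has a known evaluation cost and surplus budget flows downstream. This toy is our own and omits the selector, stall prediction, and streaming logic of the actual pipeline:

```python
def reallocate_budget(costs, per_instance_budget):
    """Toy model of dynamic budget reallocation: instance i is solved if
    its allocation (base budget plus anything saved upstream) covers
    costs[i]; any surplus is carried forward to later instances."""
    carried = 0
    solved = []
    for c in costs:
        allocation = per_instance_budget + carried
        if c <= allocation:
            solved.append(True)
            carried = allocation - c  # save the unused evaluations
        else:
            solved.append(False)
            carried = 0  # budget exhausted on a failed (uncurtailed) run
    return solved
```

With costs [2, 10, 3] and a base budget of 5, the savings from the first instance are not enough for the second, but the surplus still helps later instances; curtailing stalled runs early (step 2) would increase `carried` further.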

Citations: 0
Double XCSF on Target?
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-15 DOI: 10.1162/EVCO.a.377
Connor Schönberner, Sven Tomforde

The XCS Classifier System (XCS), the most prominent Learning Classifier System (LCS), originally focused on Reinforcement Learning (RL) problems. Over time, emphasis shifted heavily to supervised learning, with some applications in unsupervised learning. Following rekindled interest in LCSs for RL domains, we intend to capitalise on the close relationship between Q-learning and XCS. Except for Experience Replay, hardly any advances built on Q-learning have been investigated in XCS variants such as XCSF. Recognising this, we introduce three extensions inspired by Q-learning derivatives: target prediction, inspired by DQN's target networks, to improve learning stability; double target prediction, inspired by Double DQN; and a Double Q-learning mechanism, the latter two as countermeasures against overestimation. Addressing these two issues aims to improve the performance of XCSF and reduce the high variance between runs. We apply the extensions to the Maze Problem, Frozen Lake, and Cart Pole. Our observations indicate mixed results: the Double Q-learning mechanism leads to no improvement, whereas target and double target prediction can lead to observable and, in some cases, significantly improved performance and can reduce variance. This underscores that improving the RL capabilities of XCSF is non-trivial but indicates that adapting Deep Reinforcement Learning mechanisms for XCSF can be advantageous.
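The Double Q-learning mechanism the paper adapts can be stated compactly in tabular form. This is an illustrative sketch of van Hasselt's original update rule with our own naming, not of the authors' XCSF integration:

```python
import random

def double_q_update(QA, QB, s, a, r, s_next, alpha=0.1, gamma=0.9, rng=random):
    """One tabular Double Q-learning step: with probability 1/2, table A
    selects the greedy next action while table B evaluates it (and vice
    versa), which counters the overestimation bias of plain Q-learning.
    QA and QB map state -> {action: value}."""
    if rng.random() < 0.5:
        a_star = max(QA[s_next], key=QA[s_next].get)          # select with A
        QA[s][a] += alpha * (r + gamma * QB[s_next][a_star] - QA[s][a])  # evaluate with B
    else:
        a_star = max(QB[s_next], key=QB[s_next].get)          # select with B
        QB[s][a] += alpha * (r + gamma * QA[s_next][a_star] - QB[s][a])  # evaluate with A
```

Decoupling action selection from action evaluation is the whole trick; the paper investigates whether this decoupling carries over to XCSF's prediction updates.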

Citations: 0
Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem.
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-04 DOI: 10.1162/EVCO.a.376
Denis Antipov, Aneta Neumann, Frank Neumann, Andrew M Sutton

Diversity optimization is the class of optimization problems in which we aim to find a diverse set of good solutions. One of the frequently used approaches to solve such problems is to use evolutionary algorithms that evolve a desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyze EDO on a three-objective function LOTZ_k, which is a modification of the two-objective benchmark function (LeadingOnes, TrailingZeros). We prove that the GSEMO computes a set of all Pareto-optimal solutions in O(kn^3) expected iterations. We also analyze the runtime of the GSEMO_D algorithm (a modification of the GSEMO for diversity optimization) until it finds a population with the best possible diversity for two different diversity measures: the total imbalance and the sorted imbalances vector. For the first measure we show that the GSEMO_D optimizes it in O(kn^2 log(n)) expected iterations (which is asymptotically faster than the upper bound on the runtime until it finds a Pareto-optimal population), and for the second measure we show an upper bound of O(k^2 n^3 log(n)) expected iterations. We complement our theoretical analysis with an empirical study, which shows very similar behavior for both diversity measures. The results of the experiments suggest that our bounds for the total imbalance measure are tight, while the bounds for the imbalances vector are too pessimistic.
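The GSEMO analyzed here maintains an archive of mutually non-dominated solutions. A bare-bones sketch of that loop on bit strings (our own simplification, without the diversity-measure extension), run on the classic two-objective LeadingOnes/TrailingZeros benchmark:

```python
import random

def dominates(f1, f2):
    # Pareto dominance for maximization: at least as good everywhere,
    # strictly better somewhere.
    return all(a >= b for a, b in zip(f1, f2)) and any(a > b for a, b in zip(f1, f2))

def gsemo(f, n, iters=20_000, seed=1):
    """Bare-bones GSEMO: keep an archive of non-dominated bit strings;
    each step mutates a uniformly chosen archive member with standard
    bit mutation (rate 1/n) and inserts the child unless dominated."""
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n))
    pop = {x: f(x)}
    for _ in range(iters):
        parent = rng.choice(list(pop))
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        fc = f(child)
        if not any(dominates(fv, fc) for fv in pop.values()):
            pop = {y: fy for y, fy in pop.items() if not dominates(fc, fy)}
            pop[child] = fc
    return pop
```

On (LeadingOnes, TrailingZeros) the Pareto front is the set of strings 1^i 0^(n-i), so a completed archive has exactly n+1 members; GSEMO_D additionally breaks ties among Pareto-optimal populations by a diversity measure.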

Citations: 0
Cross-Representation Genetic Programming: A Case Study on Tree-Based and Linear Representations
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1162/evco.a.25
Zhixing Huang;Yi Mei;Fangfang Zhang;Mengjie Zhang;Wolfgang Banzhaf
Existing genetic programming (GP) methods are typically designed based on a certain representation, such as tree-based or linear representations. These representations show various pros and cons in different domains. However, due to the complicated relationships between representations and the fitness landscapes of GP, it is hard to intuitively determine which GP representation is the most suitable for solving a certain problem. Evolving programs (or models) with multiple representations simultaneously can alternately search on different fitness landscapes, since representations are highly related to the search space that essentially defines the fitness landscape. Fully using the latent synergies among different GP individual representations might help GP search for better solutions. However, the existing GP literature rarely investigates the simultaneous, effective evolution of multiple representations. To fill this gap, this paper proposes a cross-representation GP algorithm based on tree-based and linear representations, two commonly used GP representations. In addition, we develop a new cross-representation crossover operator to harness the interplay between tree-based and linear representations. Empirical results show that navigating the learned knowledge between basic tree-based and linear representations successfully improves the effectiveness of GP with solely tree-based or linear representation in solving symbolic regression and dynamic job shop scheduling problems.
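One way to see why crossing the two representations needs care: a linear (register-based) genome only corresponds to a tree after decompiling its data flow. A toy decompiler, with an invented instruction format `(dest, op, src1, src2)` that is our assumption, not the paper's encoding:

```python
def linear_to_tree(program, n_registers=2):
    """Decompile a simple register-based linear GP program into a nested
    expression tree rooted at register 0. Each instruction overwrites its
    destination register with an (op, left, right) node built from the
    current symbolic contents of the source registers."""
    regs = {i: f"r{i}" for i in range(n_registers)}  # symbolic initial contents
    for dest, op, a, b in program:
        regs[dest] = (op, regs[a], regs[b])
    return regs[0]
```

Because an instruction can reuse a register it previously wrote, the same linear segment can expand into a deep subtree, which is exactly the mapping a cross-representation crossover operator has to respect when exchanging material between the two genome types.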
Citations: 0
On Stochastic Operators, Fitness Landscapes, and Optimization Heuristics Performances
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1162/evco.a.24
Brahim Aboutaib;Sébastien Verel;Cyril Fonlupt;Bilel Derbel;Arnaud Liefooghe;Belaïd Ahiod
Stochastic operators are the backbone of many optimization algorithms. While existing theoretical analyses study the asymptotic runtime of such algorithms, characterizing their performance through fitness landscape analysis remains largely open. The fitness landscape approach proceeds by considering multiple characteristics to understand and explain an optimization algorithm's performance or the difficulty of an optimization problem, and hence provides a richer explanation. This paper analyzes the fitness landscapes of stochastic operators by focusing on the number of local optima and their relation to optimization performance. The search spaces of two combinatorial problems are studied, the NK-landscape and the Quadratic Assignment Problem, using binary string-based and permutation-based stochastic operators. The classical bit-flip search operator is considered for binary strings, and a generalization of the deterministic exchange operator is devised for permutation representations. We study small instances, ranging from randomly generated to real-like instances, and large instances from the NK-landscape. For large instances, we propose using an adaptive walk process to estimate the number of locally optimal solutions. Given that stochastic operators are usually used within population-based and single-solution-based evolutionary optimization algorithms, we contrast the performance of the (μ+λ)-EA and an Iterated Local Search against the landscape properties of large NK-landscape instances. Our analysis shows that characterizing the fitness landscapes induced by stochastic search operators can effectively explain the optimization performances of the algorithms we considered.
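The adaptive-walk estimation idea can be sketched for bit-string landscapes: run hill-climbing walks from random starts and count distinct endpoints, each of which is a local optimum. An illustrative sketch with our own names, not the paper's NK/QAP implementation:

```python
import random

def adaptive_walk(x, fitness, neighbors, rng):
    # Move to a uniformly random strictly better neighbor until none
    # exists; the endpoint is a local optimum by construction.
    while True:
        better = [y for y in neighbors(x) if fitness(y) > fitness(x)]
        if not better:
            return x
        x = rng.choice(better)

def estimate_local_optima(n, fitness, walks=200, seed=0):
    """Lower-bound estimate of the number of local optima of a length-n
    bit-string landscape: distinct endpoints of `walks` adaptive walks
    started from uniformly random strings."""
    rng = random.Random(seed)

    def flip_neighbors(x):
        # All Hamming-distance-1 neighbors.
        return [tuple(b ^ (i == j) for j, b in enumerate(x)) for i in range(len(x))]

    found = set()
    for _ in range(walks):
        start = tuple(rng.randint(0, 1) for _ in range(n))
        found.add(adaptive_walk(start, fitness, flip_neighbors, rng))
    return len(found)
```

The count is only a lower bound, since optima with small basins of attraction may never be hit, which is why the paper treats it as an estimate for large instances.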
Citations: 0
Using Machine Learning Methods to Assess Module Performance Contribution in Modular Optimization Frameworks
IF 3.4 2区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1162/evco_a_00356
Ana Kostovska;Diederick Vermetten;Peter Korošec;Sašo Džeroski;Carola Doerr;Tome Eftimov
Modular algorithm frameworks not only allow for combinations never tested in manually selected algorithm portfolios, but they also provide a structured approach to assess which algorithmic ideas are crucial for the observed performance of algorithms. In this paper, we propose a methodology for analyzing the impact of the different modules on the overall performance. We consider modular frameworks for two widely used families of derivative-free, black-box optimization algorithms, the covariance matrix adaptation evolution strategy (CMA-ES) and differential evolution (DE). More specifically, we use performance data of 324 modCMA-ES and 576 modDE algorithm variants (with each variant corresponding to a specific configuration of modules) obtained on the 24 BBOB problems for six different runtime budgets in two dimensions. Our analysis of these data reveals that the impact of individual modules on overall algorithm performance varies significantly. Notably, among the examined modules, the elitism module in CMA-ES and the linear population size reduction module in DE exhibit the most significant impact on performance. Furthermore, our exploratory data analysis of problem landscape data suggests that the most relevant landscape features remain consistent regardless of the configuration of individual modules, but the influence that these features have on regression accuracy varies. In addition, we apply classifiers that exploit feature importance with respect to the trained models for performance prediction and performance data, to predict the modular configurations of CMA-ES and DE algorithm variants. The results show that the predicted configurations do not exhibit a statistically significant difference in performance compared to the true configurations, with the percentage varying depending on the setup (from 49.1% to 95.5% for modCMA and 21.7% to 77.1% for DE).
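In its simplest form, a module's contribution can be approximated by marginal averaging over all configurations of the modular framework: mean performance with the module switched on minus mean performance with it off. This toy is our own baseline, not the paper's methodology, which relies on trained predictive models and feature-importance analysis:

```python
from statistics import mean

def module_marginals(performance):
    """Given performance[config] for every combination of binary module
    switches (config is a tuple of 0/1 flags), return each module's
    marginal effect: mean performance with the module enabled minus
    mean performance with it disabled."""
    n_modules = len(next(iter(performance)))
    effects = []
    for m in range(n_modules):
        on = mean(p for cfg, p in performance.items() if cfg[m])
        off = mean(p for cfg, p in performance.items() if not cfg[m])
        effects.append(on - off)
    return effects
```

Marginal averaging ignores interactions between modules (e.g., elitism mattering only under certain restart settings), which is precisely the gap that model-based importance analysis addresses.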
Evolutionary Computation 33(4): 485–512 (2025-12-01). DOI: 10.1162/evco_a_00356.
Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multiobjective Continuous Optimization Problems
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-01 DOI: 10.1162/evco_a_00367
Moritz Vinzent Seiler;Pascal Kerschke;Heike Trautmann
In many recent works, the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize single-objective continuous optimization problems has been demonstrated. These numerical features provide the input for all kinds of machine learning tasks in the domain of continuous optimization problems, ranging, for example, from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems is—to the best of our knowledge—very limited. Yet, despite their usefulness, as demonstrated in several past works, ELA features suffer from several drawbacks. These include, in particular, (1) a strong correlation between multiple features, as well as (2) its very limited applicability to multiobjective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA. In these works, among others, point-cloud transformers were used to characterize an optimization problem’s fitness landscape. However, these approaches require a large amount of labeled training data. Within this work, we propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features. We pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multiobjective optimization problems. Our proposed framework can either be used out of the box for analyzing single- and multiobjective continuous optimization problems, or subsequently fine-tuned to various tasks focusing on algorithm behavior and problem understanding.
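To make the notion of ELA features concrete, here is a minimal sketch (not Deep-ELA itself, and not from the paper) of two classic sample-based landscape features of the kind Deep-ELA replaces with learned representations; the function and sample sizes are illustrative.

```python
# Illustrative sketch: two simple ELA-style summary features computed from a
# random sample of a continuous test function. Not the authors' code.
import random
from statistics import mean, stdev

random.seed(1)

def sphere(x):
    return sum(xi * xi for xi in x)

def sample_features(f, dim=2, n=200, lo=-5.0, hi=5.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    ys = [f(x) for x in xs]
    best = xs[ys.index(min(ys))]
    dists = [sum((a - b) ** 2 for a, b in zip(x, best)) ** 0.5 for x in xs]
    # Fitness-distance correlation: close to +1 on a unimodal landscape.
    my, md = mean(ys), mean(dists)
    cov = mean((y - my) * (d - md) for y, d in zip(ys, dists))
    fdc = cov / (stdev(ys) * stdev(dists))
    # Coefficient of variation of the sampled fitness values.
    y_cv = stdev(ys) / my
    return {"fdc": fdc, "y_cv": y_cv}

feats = sample_features(sphere)
print(round(feats["fdc"], 2))
```

Features like these are cheap but strongly correlated with one another, which is one of the drawbacks the abstract lists as motivation for learned representations.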
Evolutionary Computation 33(4): 513–540 (2025-12-01). DOI: 10.1162/evco_a_00367.
Exploring Automated Algorithm Design Synergizing Large Language Models and Evolutionary Algorithms: Survey and Insights
IF 3.4 CAS Zone 2 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-10-31 DOI: 10.1162/EVCO.a.370
He Yu, Jing Liu

Designing algorithms for optimization problems, whether heuristic or metaheuristic, often relies on manual design and domain expertise, limiting scalability and adaptability. The integration of Large Language Models (LLMs) and Evolutionary Algorithms (EAs) presents a promising new way to overcome these limitations and make optimization more automated: LLMs function as dynamic agents capable of generating, refining, and interpreting optimization strategies, while EAs explore complex search spaces efficiently through evolutionary operators. Since this synergy enables a more efficient and creative search process, we first review important developments in this direction and then summarize an LLM-EA paradigm for automated optimization algorithm design. We conduct an in-depth analysis of innovative methods for four key EA modules, namely individual representation, selection, variation operators, and fitness evaluation, addressing challenges in optimization algorithm design, particularly from the perspective of LLM prompts: we analyze how the prompt flow evolves with the evolutionary process, adjusting based on evolutionary feedback (e.g., population diversity, convergence rate). Furthermore, we analyze how LLMs, through flexible prompt-driven roles, introduce semantic intelligence into fundamental EA characteristics, including diversity, convergence, adaptability, and scalability. Our systematic review and thorough analysis of the paradigm can help researchers better understand current research and boost the development of synergizing LLMs with EAs for automated optimization algorithm design.
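The LLM-EA loop surveyed here (represent, select, vary via prompt, evaluate, feed evolutionary state back into the prompt) can be sketched with the LLM call stubbed out. Everything below is illustrative: `mock_llm_mutate` stands in for a real prompt-driven variation operator, and the string-matching fitness is a toy problem.

```python
# Sketch of an LLM-as-variation-operator evolutionary loop. The "LLM" is a
# stub; in the surveyed systems it would be a prompted model that receives
# evolutionary feedback (diversity, best fitness) and proposes new candidates.
import random

random.seed(2)
TARGET = "evolutionary"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mock_llm_mutate(parent, feedback):
    """Stand-in for a prompt such as: 'Improve this candidate; population
    diversity is low, so explore more.' Here: a random one-char edit."""
    i = random.randrange(len(parent))
    c = random.choice("abcdefghijklmnopqrstuvwxyz")
    return parent[:i] + c + parent[i + 1:]

population = ["x" * len(TARGET) for _ in range(10)]
for gen in range(500):
    population.sort(key=fitness, reverse=True)
    feedback = {"diversity": len(set(population)),
                "best": fitness(population[0])}
    # Elitist selection + prompt-driven variation.
    parents = population[:5]
    population = parents + [mock_llm_mutate(random.choice(parents), feedback)
                            for _ in range(5)]
    if fitness(population[0]) == len(TARGET):
        break

print(max(fitness(p) for p in population))
```

The structure, not the toy operators, is the point: each of the four modules the survey analyzes (representation, selection, variation, evaluation) is a slot where a prompted LLM can replace a hand-designed component.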

Evolutionary Computation: 1–27 (2025-10-31). DOI: 10.1162/EVCO.a.370.