
Latest Publications in Evolutionary Computation

Maximizing Drift Is Not Optimal for Solving OneMax
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-12-01 | DOI: 10.1162/evco_a_00290
Nathan Buskulic;Carola Doerr
It seems very intuitive that for the maximization of the OneMax problem $\mathrm{OM}(x) := \sum_{i=1}^{n} x_i$ the best that an elitist unary unbiased search algorithm can do is to store a best-so-far solution, and to modify it with the operator that yields the best possible expected progress in function value. This assumption has been implicitly used in several empirical works. In Doerr et al. (2020), it was formally proven that this approach is indeed almost optimal. In this work, we prove that drift maximization is not optimal. More precisely, we show that for most fitness levels between $n/2$ and $2n/3$ the optimal mutation strengths are larger than the drift-maximizing ones. This implies that the optimal RLS is more risk-affine than the variant maximizing the stepwise expected progress. We show similar results for the mutation rates of the classic (1+1) Evolutionary Algorithm (EA) and its resampling variant, the (1+1) EA$_{>0}$. As a result of independent interest, we show that the optimal mutation strengths, unlike the drift-maximizing ones, can be even.
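As a concrete illustration of the quantity discussed in this abstract, the Python sketch below computes the expected one-step progress (drift) of an elitist mutation that flips exactly k distinct bits of an n-bit string at OneMax value m, and the k that maximizes it; the paper's point is that this drift-maximizing k is not the truly optimal choice for most levels between n/2 and 2n/3. The RLS_k model and the parameter values are assumptions made for the example, not taken from the paper.

```python
from math import comb

def drift(n, m, k):
    """Expected fitness gain of an elitist step that flips exactly k distinct
    bits of an n-bit string with OneMax value m (so n - m zero-bits).
    The number of flipped zero-bits Z is hypergeometric; the gain is
    max(0, 2Z - k) because the offspring is accepted only if not worse."""
    zeros = n - m
    total = comb(n, k)
    return sum(
        comb(zeros, j) * comb(m, k - j) / total * max(0, 2 * j - k)
        for j in range(max(0, k - m), min(k, zeros) + 1)
    )

def drift_maximizing_k(n, m, k_max):
    """Mutation strength k in 1..k_max that maximizes the expected progress."""
    return max(range(1, k_max + 1), key=lambda k: drift(n, m, k))

if __name__ == "__main__":
    n = 60
    for m in range(n // 2, 2 * n // 3 + 1, 5):
        k = drift_maximizing_k(n, m, k_max=15)
        print(f"fitness {m:3d}: drift-maximizing k = {k}, drift = {drift(n, m, k):.4f}")
```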
Citations: 21
The Univariate Marginal Distribution Algorithm Copes Well with Deception and Epistasis
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-12-01 | DOI: 10.1162/evco_a_00293
Benjamin Doerr;Martin S. Krejca
In their recent work, Lehre and Nguyen (2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by the choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, then the UMDA optimizes the DLB problem with high probability with at most $\lambda(n/2 + 2e\ln n)$ fitness evaluations. Since an offspring population size $\lambda$ of order $n\log n$ can prevent genetic drift, the UMDA can solve the DLB problem with $O(n^2\log n)$ fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than $O(n^3)$ is known (which we prove to be tight for the (1+1) EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than many classic evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
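For readers unfamiliar with the UMDA, the following is a minimal sketch of its main loop, including the frequency margins [1/n, 1-1/n] that the anti-genetic-drift argument above relies on. OneMax is used as a stand-in fitness because the exact DeceptiveLeadingBlocks definition is not reproduced here; the population sizes are illustrative assumptions, not the values analysed in the paper.

```python
import random

def umda(fitness, n, lam=200, mu=100, max_evals=50_000):
    """Univariate Marginal Distribution Algorithm with margins 1/n and 1 - 1/n.
    Keeps one frequency p[i] per bit, samples lam offspring, and resets the
    frequencies to the marginals of the mu best offspring."""
    p = [0.5] * n
    evals, best = 0, None
    while evals < max_evals:
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        evals += lam
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        for i in range(n):
            freq = sum(x[i] for x in pop[:mu]) / mu
            p[i] = min(1 - 1 / n, max(1 / n, freq))  # margins against genetic drift
        if fitness(best) == n:  # stop once the OneMax optimum is found
            break
    return best, evals

if __name__ == "__main__":
    onemax = lambda x: sum(x)
    best, evals = umda(onemax, n=50)
    print(sum(best), "ones found after", evals, "evaluations")
```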
Citations: 24
Multiobjective Evolutionary Algorithms Are Still Good: Maximizing Monotone Approximately Submodular Minus Modular Functions
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-12-01 | DOI: 10.1162/evco_a_00288
Chao Qian
As evolutionary algorithms (EAs) are general-purpose optimization algorithms, recent theoretical studies have tried to analyze their performance for solving general problem classes, with the goal of providing a general theoretical explanation of the behavior of EAs. Particularly, a simple multiobjective EA, that is, GSEMO, has been shown to be able to achieve good polynomial-time approximation guarantees for submodular optimization, where the objective function is only required to satisfy some properties and its explicit formulation is not needed. Submodular optimization has wide applications in diverse areas, and previous studies have considered the cases where the objective functions are monotone submodular, monotone non-submodular, or non-monotone submodular. To complement this line of research, this article studies the problem class of maximizing monotone approximately submodular minus modular functions (i.e., g-c) with a size constraint, where g is a so-called non-negative monotone approximately submodular function and c is a so-called non-negative modular function, resulting in the objective function (g-c) being non-monotone non-submodular in general. Different from previous analyses, we prove that by optimizing the original objective function (g-c) and the size simultaneously, the GSEMO fails to achieve a good polynomial-time approximation guarantee. However, we also prove that by optimizing a distorted objective function and the size simultaneously, the GSEMO can still achieve the best-known polynomial-time approximation guarantee. Empirical studies on the applications of Bayesian experimental design and directed vertex cover show the excellent performance of the GSEMO.
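The GSEMO discussed above is a deliberately simple algorithm; the sketch below shows its usual skeleton for subset selection, treating (objective value, subset size) as two objectives and keeping a Pareto archive. The coverage-style g, the modular cost c, and the budget are illustrative assumptions, and the sketch optimizes the plain g-c rather than the distorted objective the article analyses.

```python
import random

def gsemo(n, f, iterations=20_000, seed=1):
    """Global SEMO for subset selection: keep a Pareto archive over the two
    objectives (f(x), subset size), start from the empty set, mutate a uniformly
    chosen archive member by independent bit flips with probability 1/n, and
    insert the offspring unless some archived point weakly dominates it."""
    rng = random.Random(seed)
    archive = [([0] * n, f([0] * n), 0)]          # (bitstring, f-value, size)
    for _ in range(iterations):
        parent, _, _ = rng.choice(archive)
        child = [b ^ (rng.random() < 1 / n) for b in parent]
        fc, sc = f(child), sum(child)
        if not any(fa >= fc and sa <= sc for _, fa, sa in archive):
            archive = [(x, fx, sx) for x, fx, sx in archive
                       if not (fc >= fx and sc <= sx)]
            archive.append((child, fc, sc))
    return archive

if __name__ == "__main__":
    # Illustrative instance: coverage-style g (submodular) minus a modular cost c.
    random.seed(1)
    n, universe = 20, list(range(40))
    covers = [set(random.sample(universe, 8)) for _ in range(n)]
    costs = [random.uniform(0.1, 0.5) for _ in range(n)]

    def g_minus_c(x):
        chosen = [covers[i] for i in range(n) if x[i]]
        covered = set().union(*chosen)            # empty union is the empty set
        return len(covered) - sum(costs[i] for i in range(n) if x[i])

    front = gsemo(n, g_minus_c)
    budget = 5                                    # size constraint
    best = max((e for e in front if e[2] <= budget), key=lambda e: e[1])
    print("best g-c within budget:", round(best[1], 3), "items:", best[2])
```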
Citations: 13
Automatically Evolving Texture Image Descriptors Using the Multitree Representation in Genetic Programming Using Few Instances.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-09-01 | DOI: 10.1162/evco_a_00284
Harith Al-Sahaf, Ausama Al-Sahaf, Bing Xue, Mengjie Zhang

The performance of image classification is highly dependent on the quality of the extracted features that are used to build a model. Designing such features usually requires prior knowledge of the domain and is often undertaken by a domain expert who, if available, is very costly to employ. Automating the process of designing such features can largely reduce the cost and efforts associated with this task. Image descriptors, such as local binary patterns, have emerged in computer vision, and aim at detecting keypoints, for example, corners, line-segments, and shapes, in an image and extracting features from those keypoints. In this article, genetic programming (GP) is used to automatically evolve an image descriptor using only two instances per class by utilising a multitree program representation. The automatically evolved descriptor operates directly on the raw pixel values of an image and generates the corresponding feature vector. Seven well-known datasets were adapted to the few-shot setting and used to assess the performance of the proposed method and compared against six handcrafted and one evolutionary computation-based image descriptor as well as three convolutional neural network (CNN) based methods. The experimental results show that the new method has significantly outperformed the competitor image descriptors and CNN-based methods. Furthermore, different patterns have been identified from analysing the evolved programs.
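To make the descriptor idea concrete, here is a rough sketch of how a multitree individual can turn an image into a feature vector: each tree maps a pixel window to a scalar, the scalars are thresholded into a binary code, and the histogram of codes over the image is the feature vector. The hand-written "trees" and the thresholding rule are assumptions for illustration only; in the article the trees are evolved by GP and the exact code-generation scheme may differ.

```python
import numpy as np

def describe(image, trees, window=3):
    """Slide a window over the image, let each tree map the window's pixels to
    a scalar, threshold the outputs against the centre pixel to get one bit per
    tree, and count how often each binary code occurs (an LBP-like histogram)."""
    h, w = image.shape
    r = window // 2
    codes = np.zeros(2 ** len(trees), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = image[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            centre = win[r, r]
            code = 0
            for k, tree in enumerate(trees):
                code |= int(tree(win) > centre) << k
            codes[code] += 1
    return codes / codes.sum()  # normalised histogram = feature vector

if __name__ == "__main__":
    # Hand-written stand-ins for evolved GP trees (illustrative only).
    trees = [
        lambda w: w.mean(),
        lambda w: w.max() - w.min(),
        lambda w: w[0].mean() - w[-1].mean(),
        lambda w: w[:, 0].mean() - w[:, -1].mean(),
    ]
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(32, 32))
    print(describe(image, trees).round(3))
```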

Citations: 1
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-09-01 | DOI: 10.1162/evco_a_00286
Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy

A fundamental aspect of learning in biological neural networks is the plasticity property which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, that are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.
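A minimal sketch of the kind of discrete rule encoding described above: a Hebbian rule is a small lookup table mapping the signs of pre- and post-synaptic activations to a weight change, applied locally to every synapse. The table entries, the learning rate, and the toy network below are assumptions; in the article a genetic algorithm searches over such encodings and evaluates them on foraging and prey-predator tasks.

```python
import random

# A discrete Hebbian rule: for each combination of pre- and post-synaptic
# activation signs, the rule prescribes a weight change in {-1, 0, +1}.
SIGNS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def sign(v):
    return 1 if v >= 0 else -1

def random_rule():
    return {pair: random.choice([-1, 0, 1]) for pair in SIGNS}

def hebbian_update(weights, pre, post, rule, eta=0.05):
    """Apply the encoded rule to every synapse, using only local information."""
    return [[w + eta * rule[(sign(pre[i]), sign(post[j]))]
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

if __name__ == "__main__":
    random.seed(0)
    rule = random_rule()               # a GA would evolve this table
    weights = [[0.0, 0.0], [0.0, 0.0]]
    pre, post = [1.0, -0.5], [0.3, -0.8]
    for _ in range(10):
        weights = hebbian_update(weights, pre, post, rule)
    print(rule)
    print(weights)
```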

Citations: 11
Interaction-Transformation Evolutionary Algorithm for Symbolic Regression.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-09-01 | DOI: 10.1162/evco_a_00285
F O de Franca, G S I Aldeia

Interaction-Transformation (IT) is a new representation for Symbolic Regression that reduces the space of solutions to a set of expressions that follow a specific structure. The potential of this representation was illustrated in prior work with the algorithm called SymTree. This algorithm starts with a simple linear model and incrementally introduces new transformed features until a stop criterion is met. While the results obtained by this algorithm were competitive with the literature, it had the drawback of not scaling well with the problem dimension. This article introduces a mutation-only Evolutionary Algorithm, called ITEA, capable of evolving a population of IT expressions. One advantage of this algorithm is that it enables the user to specify the maximum number of terms in an expression. In order to verify the competitiveness of this approach, ITEA is compared to linear, nonlinear, and Symbolic Regression models from the literature. The results indicate that ITEA is capable of finding equal or better approximations than other Symbolic Regression models while being competitive to state-of-the-art nonlinear models. Additionally, since this representation follows a specific structure, it is possible to extract the importance of each original feature of a data set as an analytical function, enabling us to automate the explanation of any prediction. In conclusion, ITEA is competitive when comparing to regression models with the additional benefit of automating the extraction of additional information of the generated models.
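The Interaction-Transformation structure is easy to state in code: each term applies a transformation function to a product of features raised to integer exponents, and the terms are combined linearly with weights obtained by least squares. The terms and data below are illustrative assumptions; ITEA's contribution is the mutation-based evolution of the exponent vectors and transformations, which is not shown here.

```python
import numpy as np

def it_features(X, terms):
    """Each term is (transformation, exponent vector k); its value on a row x
    is t(prod_j x_j ** k_j).  Returns the matrix of term values."""
    cols = []
    for t, k in terms:
        interaction = np.prod(X ** np.asarray(k), axis=1)
        cols.append(t(interaction))
    return np.column_stack(cols)

def fit_it(X, y, terms):
    """Fit the linear weights (and intercept) of an IT expression by least squares."""
    Z = np.column_stack([np.ones(len(X)), it_features(X, terms)])
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0.5, 2.0, size=(200, 2))
    y = 3.0 * X[:, 0] ** 2 * X[:, 1] + np.sin(X[:, 1]) + rng.normal(0, 0.01, 200)

    # Candidate IT terms; ITEA would evolve the exponents and transformations.
    terms = [(lambda z: z, (2, 1)),        # identity of x0^2 * x1
             (np.sin, (0, 1)),             # sin(x1)
             (np.log, (1, 1))]             # log(x0 * x1)
    w = fit_it(X, y, terms)
    print("intercept and weights:", w.round(3))
```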

Citations: 31
Iterated Local Search and Other Algorithms for Buffered Two-Machine Permutation Flow Shops with Constant Processing Times on One Machine.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-09-01 | DOI: 10.1162/evco_a_00287
Hoang Thanh Le, Philine Geser, Martin Middendorf

The two-machine permutation flow shop scheduling problem with buffer is studied for the special case that all processing times on one of the two machines are equal to a constant c. This case is interesting because it occurs in various applications, for example, when one machine is a packing machine or when materials have to be transported. Different types of buffers and buffer usage are considered. It is shown that all considered buffer flow shop problems remain NP-hard for the makespan criterion even with the restriction to equal processing times on one machine. However, the special case where the constant c is larger or smaller than all processing times on the other machine is shown to be polynomially solvable by presenting an algorithm (2BF-OPT) that calculates optimal schedules in $O(n\log n)$ steps. Two heuristics for solving the NP-hard flow shop problems are proposed: (i) a modification of the commonly used NEH heuristic (mNEH) and (ii) an Iterated Local Search heuristic (2BF-ILS) that uses the mNEH heuristic for computing its initial solution. It is shown experimentally that the proposed 2BF-ILS heuristic obtains better results than two state-of-the-art algorithms for buffered flow shop problems from the literature and an Ant Colony Optimization algorithm. In addition, it is shown experimentally that 2BF-ILS obtains the same solution quality as the standard NEH heuristic, however, with a smaller number of function evaluations.
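Since both proposed heuristics build on NEH, the sketch below shows the plain NEH constructive heuristic and the standard makespan recurrence for a two-machine permutation flow shop without a buffer restriction; the buffer handling, the mNEH modification, and the iterated local search on top are not reproduced. The instance, with constant processing times on the second machine, is an illustrative assumption.

```python
def makespan(seq, p1, p2):
    """Makespan of a two-machine permutation flow shop without a buffer limit."""
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]               # machine 1 processes jobs back to back
        c2 = max(c2, c1) + p2[j]  # machine 2 waits for machine 1 and itself
    return c2

def neh(p1, p2):
    """NEH: order jobs by decreasing total processing time, then insert each
    job at the position of the partial sequence that minimises the makespan."""
    jobs = sorted(range(len(p1)), key=lambda j: p1[j] + p2[j], reverse=True)
    seq = []
    for j in jobs:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p1, p2))
    return seq

if __name__ == "__main__":
    p1 = [2, 7, 3, 6, 5]
    p2 = [4, 4, 4, 4, 4]          # constant processing times on machine 2 (c = 4)
    order = neh(p1, p2)
    print("sequence:", order, "makespan:", makespan(order, p1, p2))
```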

Citations: 3
Lower Bounds for Non-Elitist Evolutionary Algorithms via Negative Multiplicative Drift.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-06-01 | DOI: 10.1162/evco_a_00283
Benjamin Doerr

A decent number of lower bounds for non-elitist population-based evolutionary algorithms has been shown by now. Most of them are technically demanding due to the (hard to avoid) use of negative drift theorems, that is, general results which translate an expected movement away from the target into a high hitting time. We propose a simple negative drift theorem for multiplicative drift scenarios and show that it can simplify existing analyses. We discuss in more detail Lehre's (2010) negative drift in populations method, one of the most general tools to prove lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms for discrete search spaces. Together with other arguments, we obtain an alternative and simpler proof of this result, which also strengthens and simplifies this method. In particular, now only three of the five technical conditions of the previous result have to be verified. The lower bounds we obtain are explicit instead of only asymptotic. This allows us to compute concrete lower bounds for concrete algorithms, but also enables us to show that super-polynomial runtimes appear already when the reproduction rate is only a $(1-\omega(n^{-1/2}))$ factor below the threshold. For the special case of algorithms using standard bit mutation with a random mutation rate (called uniform mixing in the language of hyper-heuristics), we prove the result stated by Dang and Lehre (2016b) and extend it to mutation rates other than $\Theta(1/n)$, which includes the heavy-tailed mutation operator proposed by Doerr et al. (2017). We finally use our method and a novel domination argument to show an exponential lower bound for the runtime of the mutation-only simple genetic algorithm on OneMax for arbitrary population size.
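For context, the classical multiplicative drift theorem (an upper-bound result, see Doerr, Johannsen, and Winzen, 2012) is stated below in its usual form; the article's contribution is a negative counterpart for proving lower bounds, whose exact statement is not reproduced here.

```latex
% Classical multiplicative drift (upper bound), shown only for contrast.
\textbf{Theorem (multiplicative drift).}
Let $(X_t)_{t\ge 0}$ be non-negative random variables over a state space
$S \subseteq \{0\} \cup [x_{\min}, \infty)$ with $x_{\min} > 0$. If there is a
$\delta > 0$ such that for all $t$ and all $s \in S$ with $s \ge x_{\min}$,
\[
  \mathrm{E}[X_t - X_{t+1} \mid X_t = s] \ge \delta s ,
\]
then the first hitting time $T := \min\{t \ge 0 \mid X_t = 0\}$ satisfies
\[
  \mathrm{E}[T \mid X_0] \le \frac{1 + \ln(X_0 / x_{\min})}{\delta}.
\]
```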

Citations: 14
Offline Learning with a Selection Hyper-Heuristic: An Application to Water Distribution Network Optimisation.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-06-01 | DOI: 10.1162/evco_a_00277
William B Yates, Edward C Keedwell

A sequence-based selection hyper-heuristic with online learning is used to optimise 12 water distribution networks of varying sizes. The hyper-heuristic results are compared with those produced by five multiobjective evolutionary algorithms. The comparison demonstrates that the hyper-heuristic is a computationally efficient alternative to a multiobjective evolutionary algorithm. An offline learning algorithm is used to enhance the optimisation performance of the hyper-heuristic. The optimisation results of the offline trained hyper-heuristic are analysed statistically, and a new offline learning methodology is proposed. The new methodology is evaluated, and shown to produce an improvement in performance on each of the 12 networks. Finally, it is demonstrated that offline learning can be usefully transferred from small, computationally inexpensive problems, to larger computationally expensive ones, and that the improvement in optimisation performance is statistically significant, with 99% confidence.
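A minimal sketch of the general idea of a sequence-based selection hyper-heuristic: the next low-level operator is drawn from a transition matrix conditioned on the previously applied operator, the matrix is reinforced online when a move improves the incumbent, and an offline-trained matrix could be loaded in its place. The operators, the acceptance rule, and the toy objective (standing in for a water network simulator) are assumptions, not the configuration used in the article.

```python
import random

def sequence_hyper_heuristic(objective, x0, operators, steps=5_000, seed=0):
    """Selection hyper-heuristic: pick the next low-level operator from a
    transition matrix conditioned on the previous one (a sequence model),
    reinforce transitions that led to an improvement, accept only improvements."""
    rng = random.Random(seed)
    k = len(operators)
    trans = [[1.0] * k for _ in range(k)]    # could instead be pre-trained offline
    x, fx, prev = x0, objective(x0), 0
    for _ in range(steps):
        nxt = rng.choices(range(k), weights=trans[prev])[0]
        y = operators[nxt](x, rng)
        fy = objective(y)
        if fy < fx:                          # minimisation, improving moves only
            trans[prev][nxt] += 1.0          # online learning of the sequence model
            x, fx = y, fy
        prev = nxt
    return x, fx, trans

if __name__ == "__main__":
    # Toy continuous objective standing in for an expensive simulator.
    sphere = lambda v: sum(t * t for t in v)
    ops = [
        lambda v, r: [t + r.gauss(0, 0.5) for t in v],    # large perturbation
        lambda v, r: [t + r.gauss(0, 0.05) for t in v],   # small perturbation
        lambda v, r: [0.9 * t for t in v],                # shrink toward zero
    ]
    x, fx, trans = sequence_hyper_heuristic(sphere, [5.0] * 10, ops)
    print("best objective:", round(fx, 4))
```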

Citations: 3
Improving Model-Based Genetic Programming for Symbolic Regression of Small Expressions.
IF 6.8 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2021-06-01 | DOI: 10.1162/evco_a_00278
M Virgolin, T Alderliesten, C Witteveen, P A N Bosman

The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is a model-based EA framework that has been shown to perform well in several domains, including Genetic Programming (GP). Differently from traditional EAs where variation acts blindly, GOMEA learns a model of interdependencies within the genotype, that is, the linkage, to estimate what patterns to propagate. In this article, we study the role of Linkage Learning (LL) performed by GOMEA in Symbolic Regression (SR). We show that the non-uniformity in the distribution of the genotype in GP populations negatively biases LL, and propose a method to correct for this. We also propose approaches to improve LL when ephemeral random constants are used. Furthermore, we adapt a scheme of interleaving runs to alleviate the burden of tuning the population size, a crucial parameter for LL, to SR. We run experiments on 10 real-world datasets, enforcing a strict limitation on solution size, to enable interpretability. We find that the new LL method outperforms the standard one, and that GOMEA outperforms both traditional and semantic GP. We also find that the small solutions evolved by GOMEA are competitive with tuned decision trees, making GOMEA a promising new approach to SR.
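One ingredient of GOMEA is easy to isolate: Gene-pool Optimal Mixing improves a solution by copying genes subset-by-subset from random donors and keeping only non-worsening changes, where the subsets (the family of subsets, FOS) normally come from linkage learning. The sketch below uses a trivial hand-picked FOS and OneMax instead of a learned linkage tree and a regression fitness; it only illustrates the mixing step, not the article's GP variant or its LL corrections.

```python
import random

def gene_pool_optimal_mixing(solution, population, fos, fitness):
    """Improve one solution by Gene-pool Optimal Mixing: for every linkage set
    in the FOS, copy the corresponding genes from a random donor and keep the
    change only if the fitness does not get worse."""
    current = list(solution)
    f_current = fitness(current)
    for subset in fos:
        donor = random.choice(population)
        trial = list(current)
        for i in subset:
            trial[i] = donor[i]
        f_trial = fitness(trial)
        if f_trial >= f_current:          # maximisation; accept equal or better
            current, f_current = trial, f_trial
    return current, f_current

if __name__ == "__main__":
    random.seed(3)
    n = 12
    # Trivial FOS: univariate sets plus consecutive pairs; GOMEA would instead
    # learn a linkage tree from dependencies observed in the population.
    fos = [[i] for i in range(n)] + [[i, i + 1] for i in range(n - 1)]
    onemax = lambda x: sum(x)
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(30)]
    improved, f = gene_pool_optimal_mixing(population[0], population, fos, onemax)
    print("before:", sum(population[0]), "after:", f)
```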

Citations: 49