
Latest Publications in Evolutionary Computation

Modular Grammatical Evolution for the Generation of Artificial Neural Networks
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-01 | DOI: 10.1162/evco_a_00302
Khabat Soltanian;Ali Ebnenasir;Mohsen Afsharchi
This article presents a novel method, called Modular Grammatical Evolution (MGE), toward validating the hypothesis that restricting the solution space of NeuroEvolution to modular and simple neural networks enables the efficient generation of smaller and more structured neural networks while providing acceptable (and in some cases superior) accuracy on large data sets. MGE also enhances the state-of-the-art Grammatical Evolution (GE) methods in two directions. First, MGE's representation is modular in that each individual has a set of genes, and each gene is mapped to a neuron by grammatical rules. Second, the proposed representation mitigates two important drawbacks of GE, namely the low scalability and weak locality of representation, toward generating modular and multilayer networks with a high number of neurons. We define and evaluate five different forms of structures with and without modularity using MGE and find single-layer modules with no coupling more productive. Our experiments demonstrate that modularity helps in finding better neural networks faster. We have validated the proposed method using ten well-known classification benchmarks with different sizes, feature counts, and output class counts. Our experimental results indicate that MGE provides superior accuracy with respect to existing NeuroEvolution methods and returns classifiers that are significantly simpler than other machine learning generated classifiers. Finally, we empirically demonstrate that MGE outperforms other GE methods in terms of locality and scalability properties.
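The gene-per-neuron mapping described above can be illustrated with a small sketch. The Python snippet below is an assumption-laden toy, not the paper's grammar or implementation: each gene is a list of integer codons decoded into one hidden neuron (its input connections and weights), so the genome stays modular and a network is simply the collection of decoded neurons.

```python
# Illustrative sketch of a modular genome where each gene encodes one hidden
# neuron, in the spirit of MGE's gene-to-neuron mapping. The codon
# interpretation, the voting output layer, and all parameter choices are
# invented for illustration, not the paper's grammar.
import math
import random

def decode_gene(gene, n_inputs):
    """Map one gene (a list of integer codons) to a single hidden neuron."""
    n_conn = 1 + gene[0] % n_inputs              # how many inputs this neuron uses
    sources = [gene[1 + i] % n_inputs for i in range(n_conn)]
    weights = [((gene[1 + n_conn + i] % 200) - 100) / 100.0 for i in range(n_conn)]
    return sources, weights

def decode_genome(genome, n_inputs):
    """Each gene becomes one neuron; the network is a set of independent modules."""
    return [decode_gene(g, n_inputs) for g in genome]

def forward(neurons, x):
    hidden = [math.tanh(sum(w * x[s] for s, w in zip(srcs, ws)))
              for srcs, ws in neurons]
    return 1.0 if sum(hidden) > 0 else 0.0       # simple voting output layer

random.seed(0)
genome = [[random.randint(0, 255) for _ in range(12)] for _ in range(4)]  # 4 genes = 4 neurons
net = decode_genome(genome, n_inputs=3)
print(forward(net, [0.2, -0.5, 0.9]))
```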
Citations: 0
Transfer Learning Based Co-Surrogate Assisted Evolutionary Bi-Objective Optimization for Objectives with Non-Uniform Evaluation Times
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-01 | DOI: 10.1162/evco_a_00300
Xilu Wang;Yaochu Jin;Sebastian Schmitt;Markus Olhofer
Most existing multiobjective evolutionary algorithms (MOEAs) implicitly assume that each objective function can be evaluated within the same period of time. Typically, this is untenable in many real-world optimization scenarios where evaluation of different objectives involves different computer simulations or physical experiments with distinct time complexity. To address this issue, a transfer learning scheme based on surrogate-assisted evolutionary algorithms (SAEAs) is proposed, in which a co-surrogate is adopted to model the functional relationship between the fast and slow objective functions and a transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective. Our experimental results on the DTLZ and UF test suites demonstrate that the proposed algorithm is competitive for solving bi-objective optimization problems where the objectives have non-uniform evaluation times.
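To make the co-surrogate idea concrete, here is a hedged sketch: given an archive of solutions where both objectives are known, a cheap model predicts the slow objective from the decision vector plus the fast objective value, and is then used to pre-screen candidates before spending a real slow evaluation. The quadratic test functions, the plain least-squares model, and the pre-screening loop are illustrative assumptions, not the SAEA components used in the paper.

```python
# Minimal sketch of a co-surrogate: learn a cheap model of the *slow* objective
# from fully evaluated solutions, using the cheap (fast) objective value as an
# extra input feature, then pre-screen new candidates with it.
import numpy as np

def f_fast(x):                     # cheap objective
    return float(np.sum(x ** 2))

def f_slow(x):                     # expensive objective (pretend it is costly)
    return float(np.sum((x - 1.0) ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 5))            # archive of fully evaluated points
F1 = np.array([f_fast(x) for x in X])
F2 = np.array([f_slow(x) for x in X])

# Co-surrogate: predict f_slow from (x, f_fast(x)) with linear least squares.
A = np.column_stack([X, F1, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, F2, rcond=None)

def predict_slow(x):
    return float(np.concatenate([x, [f_fast(x), 1.0]]) @ coef)

# Pre-screen candidates; only the most promising one would actually receive a
# real (slow) evaluation in a full SAEA loop.
candidates = rng.uniform(-2, 2, size=(10, 5))
best = min(candidates, key=predict_slow)
print("surrogate pick:", predict_slow(best), "true slow value:", f_slow(best))
```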
Citations: 11
Evolving Multimodal Robot Behavior via Many Stepping Stones with the Combinatorial Multiobjective Evolutionary Algorithm
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-01 | DOI: 10.1162/evco_a_00301
Joost Huizinga;Jeff Clune
An important challenge in reinforcement learning is to solve multimodal problems, where agents have to act in qualitatively different ways depending on the circumstances. Because multimodal problems are often too difficult to solve directly, it is often helpful to define a curriculum, which is an ordered set of subtasks that can serve as the stepping stones for solving the overall problem. Unfortunately, choosing an effective ordering for these subtasks is difficult, and a poor ordering can reduce the performance of the learning process. Here, we provide a thorough introduction and investigation of the Combinatorial Multiobjective Evolutionary Algorithm (CMOEA), which allows all combinations of subtasks to be explored simultaneously. We compare CMOEA against three algorithms that can similarly optimize on multiple subtasks simultaneously: NSGA-II, NSGA-III, and ε-Lexicase Selection. The algorithms are tested on a function-optimization problem with two subtasks, a simulated multimodal robot locomotion problem with six subtasks, and a simulated robot maze-navigation problem where a hundred random mazes are treated as subtasks. On these problems, CMOEA either outperforms or is competitive with the controls. As a separate contribution, we show that adding a linear combination over all objectives can improve the ability of the control algorithms to solve these multimodal problems. Lastly, we show that CMOEA can leverage auxiliary objectives more effectively than the controls on the multimodal locomotion task. In general, our experiments suggest that CMOEA is a promising algorithm for solving multimodal problems.
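A minimal sketch of CMOEA's central bookkeeping may help: one bin per non-empty combination of subtasks, with each bin retaining the individuals that perform best on that combination. The subtask names, the product-based combination score, and the integer "genomes" below are invented for illustration and are not the paper's simulation tasks or full algorithm.

```python
# Sketch of CMOEA-style bins: one bin per subtask combination, each keeping
# only its best-scoring individuals. Scoring by the product of subtask
# performances is an illustrative choice.
import itertools
import random

SUBTASKS = ["walk", "turn", "jump"]             # hypothetical subtask names

def evaluate(ind):
    """Return a performance in [0, 1] per subtask (toy stand-in for simulation)."""
    random.seed(ind)                            # deterministic per individual
    return {t: random.random() for t in SUBTASKS}

def combo_score(perf, combo):
    score = 1.0
    for t in combo:
        score *= perf[t]                        # reward doing *all* tasks in the combo
    return score

# One bin per non-empty subtask combination.
bins = {c: [] for r in range(1, len(SUBTASKS) + 1)
        for c in itertools.combinations(SUBTASKS, r)}
BIN_SIZE = 3

population = list(range(20))                    # individuals are just ints here
for ind in population:
    perf = evaluate(ind)
    for combo, members in bins.items():
        members.append((combo_score(perf, combo), ind))
        members.sort(reverse=True)
        del members[BIN_SIZE:]                  # keep only the best per bin

for combo, members in bins.items():
    print(combo, [ind for _, ind in members])
```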
Citations: 15
VSD-MOEA: A Dominance-Based Multiobjective Evolutionary Algorithm with Explicit Variable Space Diversity Management
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-01 | DOI: 10.1162/evco_a_00299
Joel Chacón Castillo;Carlos Segura;Carlos A. Coello Coello
Most state-of-the-art Multiobjective Evolutionary Algorithms (MOEAs) promote the preservation of diversity of objective function space but neglect the diversity of decision variable space. The aim of this article is to show that explicitly managing the amount of diversity maintained in the decision variable space is useful to increase the quality of MOEAs when taking into account metrics of the objective space. Our novel Variable Space Diversity-based MOEA (VSD-MOEA) explicitly considers the diversity of both decision variable and objective function space. This information is used with the aim of properly adapting the balance between exploration and intensification during the optimization process. Particularly, at the initial stages, decisions made by the approach are more biased by the information on the diversity of the variable space, whereas it gradually grants more importance to the diversity of objective function space as the evolution progresses. The latter is achieved through a novel density estimator. The new method is compared with state-of-the-art MOEAs using several benchmarks with two and three objectives. This novel proposal yields much better results than state-of-the-art schemes when considering metrics applied on objective function space, exhibiting a more stable and robust behavior.
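The following sketch illustrates the general idea of trading off objective-space quality against decision-variable-space diversity during survivor selection, with the diversity term fading as the run progresses. The scalarised quality measure, the greedy selection loop, and the linear decay schedule are simplifying assumptions, not the published VSD-MOEA procedure or its density estimator.

```python
# Sketch of diversity-aware survivor selection: keep individuals that balance
# objective-space quality against decision-space distance to the already
# selected set, with the diversity weight decaying over the run.
import random

def objectives(x):                      # a simple bi-objective test problem
    return (x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + (x[1] - 1) ** 2)

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def survivor_selection(pop, mu, progress):
    """progress in [0, 1]: 0 = start of run (diversity matters), 1 = end."""
    w_div = 1.0 - progress                              # diversity weight decays
    quality = {x: sum(objectives(x)) for x in pop}      # crude scalar quality
    selected = [min(pop, key=quality.get)]
    while len(selected) < mu:
        def score(x):
            d = min(dist(x, s) for s in selected)       # distance in variable space
            return quality[x] - w_div * d
        remaining = [x for x in pop if x not in selected]
        selected.append(min(remaining, key=score))
    return selected

random.seed(2)
pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(30)]
early = survivor_selection(pop, mu=5, progress=0.1)     # favours spread-out solutions
late = survivor_selection(pop, mu=5, progress=0.9)      # favours good objective values
print(early)
print(late)
```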
Citations: 1
Convergence Analysis of the Hessian Estimation Evolution Strategy
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-01 | DOI: 10.1162/evco_a_00295
Tobias Glasmachers;Oswin Krause
The class of algorithms called Hessian Estimation Evolution Strategies (HE-ESs) update the covariance matrix of their sampling distribution by directly estimating the curvature of the objective function. The approach is practically efficient, as attested by respectable performance on the BBOB testbed, even on rather irregular functions. In this article, we formally prove two strong guarantees for the (1 + 4)-HE-ES, a minimal elitist member of the family: stability of the covariance matrix update, and as a consequence, linear convergence on all convex quadratic problems at a rate that is independent of the problem instance.
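A caricature of the curvature-estimation mechanism is sketched below: second-order finite differences along each coordinate give curvature estimates that rescale the sampling step sizes, followed by an elitist keep-the-best update. This diagonal-only toy is only meant to convey the idea; the actual (1 + 4)-HE-ES maintains and updates a full covariance matrix, as analysed in the paper.

```python
# Simplified, diagonal-only caricature of curvature-based step-size adaptation:
# probe the objective symmetrically around the mean, turn the second finite
# difference into a curvature estimate, and scale sampling along that direction.
import numpy as np

def f(x):                                        # convex quadratic test function
    H = np.diag([1.0, 25.0])                     # ill-conditioned Hessian
    return 0.5 * float(x @ H @ x)

rng = np.random.default_rng(3)
m = np.array([2.0, 2.0])                         # current search point
sigma = np.ones(2)                               # per-coordinate step sizes
h = 1e-3

for it in range(50):
    for i in range(2):                           # probe each coordinate direction
        d = np.zeros(2)
        d[i] = 1.0
        curv = (f(m + h * d) - 2 * f(m) + f(m - h * d)) / h ** 2
        if curv > 0:
            sigma[i] = 1.0 / np.sqrt(curv)       # flatter directions get larger steps
    # elitist step: keep the best of a few curvature-scaled offspring
    offspring = [m + sigma * rng.standard_normal(2) * 0.5 for _ in range(4)]
    best = min(offspring, key=f)
    if f(best) < f(m):
        m = best

print("final point:", m, "f:", f(m))
```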
Citations: 1
Runtime Analysis of Restricted Tournament Selection for Bimodal Optimisation
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-01 | DOI: 10.1162/evco_a_00292
Edgar Covantes Osuna;Dirk Sudholt
Niching methods have been developed to maintain the population diversity, to investigate many peaks in parallel, and to reduce the effect of genetic drift. We present the first rigorous runtime analyses of restricted tournament selection (RTS), embedded in a (μ+1) EA, and analyse its effectiveness at finding both optima of the bimodal function TwoMax. In RTS, an offspring competes against the closest individual, with respect to some distance measure, amongst w (window size) population members (chosen uniformly at random with replacement), to encourage competition within the same niche. We prove that RTS finds both optima on TwoMax efficiently if the window size w is large enough. However, if w is too small, RTS fails to find both optima even in exponential time, with high probability. We further consider a variant of RTS selecting individuals for the tournament without replacement. It yields a more diverse tournament and is more effective at preventing one niche from taking over the other. However, this comes at the expense of a slower progress towards optima when a niche collapses to a single individual. Our theoretical results are accompanied by experimental studies that shed light on parameters not covered by the theoretical results and support a conjectured lower runtime bound.
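The RTS replacement rule itself is compact enough to sketch directly. In the toy (μ+1)-style loop below, an offspring is compared against the closest (by Hamming distance) of w members sampled uniformly at random with replacement, mirroring the mechanism analysed above; the population size, window size, and mutation rate are arbitrary illustrative choices.

```python
# Minimal sketch of restricted tournament selection (RTS) on TwoMax: the
# offspring replaces the closest of w randomly drawn members only if it is at
# least as fit, which encourages competition within the same niche.
import random

N, MU, W = 20, 10, 8                          # bit-string length, population size, window

def twomax(x):                                # bimodal: optima at all-zeros and all-ones
    ones = sum(x)
    return max(ones, N - ones)

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def mutate(x):
    return [bit ^ (random.random() < 1.0 / N) for bit in x]   # flip each bit w.p. 1/N

random.seed(4)
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(MU)]

for generation in range(2000):
    parent = random.choice(pop)
    child = mutate(parent)
    window = [random.randrange(MU) for _ in range(W)]          # with replacement
    closest = min(window, key=lambda i: hamming(child, pop[i]))
    if twomax(child) >= twomax(pop[closest]):                  # compete within the niche
        pop[closest] = child

print(sorted(sum(x) for x in pop))            # ideally shows both 0s and N's (both optima)
```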
Citations: 5
An Analysis of the Influence of Noneffective Instructions in Linear Genetic Programming
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-01 | DOI: 10.1162/evco_a_00296
Léo Françoso Dal Piccol Sotto;Franz Rothlauf;Vinícius Veloso de Melo;Márcio P. Basgalupp
Linear Genetic Programming (LGP) represents programs as sequences of instructions and has a Directed Acyclic Graph (DAG) dataflow. The results of instructions are stored in registers that can be used as arguments by other instructions. Instructions that are disconnected from the main part of the program are called noneffective instructions, or structural introns. They also appear in other DAG-based GP approaches like Cartesian Genetic Programming (CGP). This article studies four hypotheses on the role of structural introns: noneffective instructions (1) serve as evolutionary memory, where evolved information is stored and later used in search, (2) preserve population diversity, (3) allow neutral search, where structural introns increase the number of neutral mutations and improve performance, and (4) serve as genetic material to enable program growth. We study different variants of LGP controlling the influence of introns for symbolic regression, classification, and digital circuits problems. We find that there is (1) evolved information in the noneffective instructions that can be reactivated and that (2) structural introns can promote programs with higher effective diversity. However, both effects have no influence on LGP search performance. On the other hand, allowing mutations to not only be applied to effective but also to noneffective instructions (3) increases the rate of neutral mutations and (4) contributes to program growth by making use of the genetic material available as structural introns. This comes along with a significant increase of LGP performance, which makes structural introns important for LGP.
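Structural introns can be identified with a single backward pass over the instruction list, which the sketch below demonstrates on a four-instruction register machine. The three-address instruction format and the register names are generic LGP conventions chosen for illustration, not a specific system from the article.

```python
# Sketch of structural-intron detection in linear GP: walk the instruction list
# backwards from the output register and mark as "effective" only those
# instructions whose destination register is still needed.
# Instruction format: (dest_register, operator, src_register_1, src_register_2)
program = [
    ("r2", "+", "r0", "r1"),     # effective
    ("r3", "*", "r3", "r3"),     # structural intron: r3 never reaches the output
    ("r1", "-", "r2", "r0"),     # effective
    ("r0", "+", "r1", "r2"),     # effective (r0 is the output register)
]
OUTPUT = "r0"

def mark_effective(program, output):
    needed = {output}
    effective = [False] * len(program)
    for i in range(len(program) - 1, -1, -1):          # backward pass
        dest, _, src1, src2 = program[i]
        if dest in needed:
            effective[i] = True
            needed.discard(dest)                       # dest is redefined here
            needed.update({src1, src2})                # its inputs become needed
    return effective

for instr, eff in zip(program, mark_effective(program, OUTPUT)):
    print(instr, "effective" if eff else "intron")
```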
Citations: 5
High-Dimensional Unbalanced Binary Classification by Genetic Programming with Multi-Criterion Fitness Evaluation and Selection
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-01 | DOI: 10.1162/evco_a_00304
Wenbin Pei;Bing Xue;Lin Shang;Mengjie Zhang
High-dimensional unbalanced classification is challenging because of the joint effects of high dimensionality and class imbalance. Genetic programming (GP) has potential benefits for high-dimensional classification due to its built-in capability to select informative features. However, once data are not evenly distributed, GP tends to develop biased classifiers which achieve a high accuracy on the majority class but a low accuracy on the minority class. Unfortunately, the minority class is often at least as important as the majority class. It is therefore important to investigate how GP can be effectively utilized for high-dimensional unbalanced classification. In this article, to address the performance bias issue of GP, a new two-criterion fitness function is developed, which considers two criteria, that is, the approximation of the area under the curve (AUC) and the classification clarity (i.e., how well a program can separate the two classes). The obtained values on the two criteria are combined in pairs, instead of summing them together. Furthermore, this article designs a three-criterion tournament selection to effectively identify and select good programs to be used by genetic operators for generating offspring during the evolutionary learning process. The experimental results show that the proposed method achieves better classification performance than other compared methods.
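A hedged sketch of such a two-criterion fitness is given below: each candidate classifier is scored by an AUC approximation and by a separate "clarity" term, the two values are kept as a pair, and a tournament prefers Pareto-dominant pairs instead of summing the criteria. The clarity formula, the dominance-based tournament, and the lambda "programs" are illustrative stand-ins rather than the article's exact definitions.

```python
# Two-criterion fitness sketch: (AUC approximation, clarity) kept as a pair and
# compared by dominance in a tournament, never summed into a single number.
import random
import statistics

def auc_approx(scores_pos, scores_neg):
    """Probability that a random positive output exceeds a random negative one."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

def clarity(scores_pos, scores_neg):
    """Gap between the class means, shrunk by the within-class spread."""
    spread = statistics.pstdev(scores_pos) + statistics.pstdev(scores_neg) + 1e-9
    return (statistics.mean(scores_pos) - statistics.mean(scores_neg)) / spread

def fitness(program, inputs_pos, inputs_neg):
    sp = [program(x) for x in inputs_pos]
    sn = [program(x) for x in inputs_neg]
    return (auc_approx(sp, sn), clarity(sp, sn))     # kept as a pair, not summed

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def tournament(scored):
    (fa, pa), (fb, pb) = random.sample(scored, 2)
    if dominates(fa, fb):
        return pa
    if dominates(fb, fa):
        return pb
    return random.choice([pa, pb])                   # incomparable pair: pick either

random.seed(5)
inputs_pos = [random.gauss(1.0, 1.0) for _ in range(20)]   # toy minority class
inputs_neg = [random.gauss(-1.0, 1.0) for _ in range(80)]  # toy majority class
programs = [lambda x: x, lambda x: -x, lambda x: x ** 3]   # stand-ins for GP trees
scored = [(fitness(p, inputs_pos, inputs_neg), p) for p in programs]
for (auc, cl), _ in scored:
    print(f"AUC approx = {auc:.2f}, clarity = {cl:.2f}")
print("tournament winner:", fitness(tournament(scored), inputs_pos, inputs_neg))
```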
Citations: 4
Shape-Constrained Symbolic Regression—Improving Extrapolation with Prior Knowledge
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-01 | DOI: 10.1162/evco_a_00294
G. Kronberger;F. O. de Franca;B. Burlacu;C. Haider;M. Kommenda
We investigate the addition of constraints on the function image and its derivatives for the incorporation of prior knowledge in symbolic regression. The approach is called shape-constrained symbolic regression and allows us to enforce, for example, monotonicity of the function over selected inputs. The aim is to find models which conform to expected behavior and which have improved extrapolation capabilities. We demonstrate the feasibility of the idea and propose and compare two evolutionary algorithms for shape-constrained symbolic regression: (i) an extension of tree-based genetic programming which discards infeasible solutions in the selection step, and (ii) a two-population evolutionary algorithm that separates the feasible from the infeasible solutions. In both algorithms we use interval arithmetic to approximate bounds for models and their partial derivatives. The algorithms are tested on a set of 19 synthetic and four real-world regression problems. Both algorithms are able to identify models which conform to shape constraints which is not the case for the unmodified symbolic regression algorithms. However, the predictive accuracy of models with constraints is worse on the training set and the test set. Shape-constrained polynomial regression produces the best results for the test set but also significantly larger models.
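The feasibility test at the heart of this approach can be sketched with a few lines of interval arithmetic: bound the model's partial derivative over the input box and accept the model only if the bound proves the required shape (here, monotonic non-decreasing). The tiny interval helpers and the fixed candidate model are assumptions for illustration; in the paper this check is embedded in tree-based GP and a two-population EA.

```python
# Sketch of a shape-constraint feasibility check: bound a model's derivative
# over an input interval with interval arithmetic and reject the model if the
# bound allows a violation of monotonicity.
def iv_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def iv_const(c):
    return (c, c)

# Candidate model: f(x) = 2.0*x + 0.3*x**2, so df/dx = 2.0 + 0.6*x.
def derivative_interval(x_interval):
    return iv_add(iv_const(2.0), iv_mul(iv_const(0.6), x_interval))

def monotone_non_decreasing(x_interval):
    lower, _ = derivative_interval(x_interval)
    return lower >= 0.0                      # derivative provably non-negative

print(monotone_non_decreasing((0.0, 5.0)))   # True: constraint satisfied on [0, 5]
print(monotone_non_decreasing((-10.0, 5.0))) # False: 2.0 + 0.6*(-10) can be negative
```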
Citations: 24
Modular Grammatical Evolution for the Generation of Artificial Neural Networks
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2021-12-08 | DOI: 10.1145/3520304.3534072
Khabat Soltanian, Ali Ebnenasir, M. Afsharchi
This article presents a novel method, called Modular Grammatical Evolution (MGE), toward validating the hypothesis that restricting the solution space of NeuroEvolution to modular and simple neural networks enables the efficient generation of smaller and more structured neural networks while providing acceptable (and in some cases superior) accuracy on large data sets. MGE also enhances the state-of-the-art Grammatical Evolution (GE) methods in two directions. First, MGE's representation is modular in that each individual has a set of genes, and each gene is mapped to a neuron by grammatical rules. Second, the proposed representation mitigates two important drawbacks of GE, namely the low scalability and weak locality of representation, toward generating modular and multilayer networks with a high number of neurons. We define and evaluate five different forms of structures with and without modularity using MGE and find single-layer modules with no coupling more productive. Our experiments demonstrate that modularity helps in finding better neural networks faster. We have validated the proposed method using ten well-known classification benchmarks with different sizes, feature counts, and output class counts. Our experimental results indicate that MGE provides superior accuracy with respect to existing NeuroEvolution methods and returns classifiers that are significantly simpler than other machine learning generated classifiers. Finally, we empirically demonstrate that MGE outperforms other GE methods in terms of locality and scalability properties.
Citations: 5