
Evolutionary Computation: Latest Publications

Genetic Programming for Evolving Similarity Functions for Clustering: Representations and Analysis
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-12-02 | DOI: 10.1162/evco_a_00264
Andrew Lensen;Bing Xue;Mengjie Zhang
Clustering is a difficult and widely studied data mining task, with many varieties of clustering algorithms proposed in the literature. Nearly all algorithms use a similarity measure such as a distance metric (e.g., Euclidean distance) to decide which instances to assign to the same cluster. These similarity measures are generally predefined and cannot be easily tailored to the properties of a particular dataset, which leads to limitations in the quality and the interpretability of the clusters produced. In this article, we propose a new approach to automatically evolving similarity functions for a given clustering algorithm by using genetic programming. We introduce a new genetic programming-based method which automatically selects a small subset of features (feature selection) and then combines them using a variety of functions (feature construction) to produce dynamic and flexible similarity functions that are specifically designed for a given dataset. We demonstrate how the evolved similarity functions can be used to perform clustering using a graph-based representation. The results of a variety of experiments across a range of large, high-dimensional datasets show that the proposed approach can achieve higher and more consistent performance than the benchmark methods. We further extend the proposed approach to automatically produce multiple complementary similarity functions by using a multi-tree approach, which gives further performance improvements. We also analyse the interpretability and structure of the automatically evolved similarity functions to provide insight into how and why they are superior to standard distance metrics.
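
To make the idea concrete, the short Python sketch below shows one way an evolved, tree-shaped dissimilarity function over a few selected features could be evaluated and plugged into a graph-based clustering step. The encoding, function set, and nearest-neighbour graph construction are our own illustrative assumptions, not the authors' implementation.

    # Minimal sketch (hypothetical encoding, not the paper's code): a GP tree is a nested
    # tuple over selected feature indices; evaluating it on a pair of instances yields a
    # dataset-specific dissimilarity score (smaller means more similar).
    import operator

    FUNCS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul,
             "max": max, "min": min}

    def evolved_similarity(tree, x, y):
        """Evaluate a GP tree on a pair of instances x and y (sequences of feature values)."""
        if isinstance(tree, int):          # terminal node: a selected feature index
            return abs(x[tree] - y[tree])  # per-feature dissimilarity
        op, left, right = tree             # internal node: (function name, subtree, subtree)
        return FUNCS[op](evolved_similarity(left, x, y), evolved_similarity(right, x, y))

    # Example individual combining features 0 and 3 (feature selection + construction).
    tree = ("add", ("mul", 0, 3), ("max", 0, 3))

    def nearest_neighbour_edges(data, tree):
        """Graph-based step: link each instance to its closest neighbour under the evolved
        measure; connected components of the resulting graph can be read off as clusters."""
        edges = []
        for i in range(len(data)):
            j = min((k for k in range(len(data)) if k != i),
                    key=lambda k: evolved_similarity(tree, data[i], data[k]))
            edges.append((i, j))
        return edges

    data = [[0.1, 0.0, 0.2, 0.9], [0.1, 0.1, 0.2, 1.0], [5.0, 4.0, 3.0, 2.0]]
    print(nearest_neighbour_edges(data, tree))   # -> [(0, 1), (1, 0), (2, 1)]
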
Citations: 12
Evolutionary Image Transition and Painting Using Random Walks
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-12-02 | DOI: 10.1162/evco_a_00270
Aneta Neumann;Bradley Alexander;Frank Neumann
We present a study demonstrating how random walk algorithms can be used for evolutionary image transition. We design different mutation operators based on uniform and biased random walks and study how their combination with a baseline mutation operator can lead to interesting image transition processes in terms of visual effects and artistic features. Using feature-based analysis we investigate the evolutionary image transition behaviour with respect to different features and evaluate the images constructed during the image transition process. Afterwards, we investigate how modifications of our biased random walk approaches can be used for evolutionary image painting. We introduce an evolutionary image painting approach whose underlying biased random walk can be controlled by a parameter influencing the bias of the random walk and thereby creating different artistic painting effects.
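
As a rough illustration of the mutation idea (the operator definition, the bias encoding, and the toroidal wrapping are our assumptions, not the paper's), a biased random walk over the pixel grid that copies target pixels into the current image could look like this:

    import random

    def random_walk_mutation(current, target, steps=500, bias=(0.0, 0.0)):
        """Walk over the pixel grid of `current` and copy in the corresponding `target`
        pixels; `bias` in [-1, 1]^2 tilts the step distribution ((0, 0) gives the uniform walk)."""
        h, w = len(current), len(current[0])
        r, c = random.randrange(h), random.randrange(w)       # random start cell
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # up, down, left, right
        weights = [1 - bias[0], 1 + bias[0], 1 - bias[1], 1 + bias[1]]
        for _ in range(steps):
            current[r][c] = target[r][c]                      # transition this pixel
            dr, dc = random.choices(moves, weights=weights)[0]
            r, c = (r + dr) % h, (c + dc) % w                 # wrap around at the borders
        return current

    # Example on tiny greyscale "images" stored as nested lists:
    img_a = [[0] * 8 for _ in range(8)]
    img_b = [[255] * 8 for _ in range(8)]
    random_walk_mutation(img_a, img_b, steps=20, bias=(0.5, 0.0))

In an evolutionary image transition run, such an operator would be applied to the current individual and offspring accepted according to progress towards the target image or, for the painting variant, according to aesthetic feature values.
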
Citations: 7
Errata: Convergence Analysis of Evolutionary Algorithms That Are Based on the Paradigm of Information Geometry
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-12-02 | DOI: 10.1162/evco_x_00281
Hans-Georg Beyer
Citations: 0
Evolved Transistor Array Robot Controllers
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-12-02 | DOI: 10.1162/evco_a_00272
Michael Garvie;Ittai Flascher;Andrew Philippides;Adrian Thompson;Phil Husbands
For the first time, a field programmable transistor array (FPTA) was used to evolve robot control circuits directly in analog hardware. Controllers were successfully incrementally evolved for a physical robot engaged in a series of visually guided behaviours, including finding a target in a complex environment where the goal was hidden from most locations. Circuits for recognising spoken commands were also evolved, and these were used in conjunction with the controllers to enable voice control of the robot, triggering behavioural switching. Poor-quality visual sensors were deliberately used to test the ability of evolved analog circuits to deal with noisy, uncertain data in real time. Visual features were coevolved with the controllers to automatically achieve dimensionality reduction and feature extraction and selection in an integrated way. An efficient new method was developed for simulating the robot in its visual environment. This allowed controllers to be evaluated in a simulation connected to the FPTA. The controllers then transferred seamlessly to the real world. The circuit replication issue was also addressed in experiments where circuits were evolved to be able to function correctly in multiple areas of the FPTA. A methodology was developed to analyse the evolved circuits, which provided insights into their operation. Comparative experiments demonstrated the superior evolvability of the transistor array medium.
Citations: 1
Difficulty Adjustable and Scalable Constrained Multiobjective Test Problem Toolkit
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00259
Zhun Fan;Wenji Li;Xinye Cai;Hui Li;Caimin Wei;Qingfu Zhang;Kalyanmoy Deb;Erik Goodman
Multiobjective evolutionary algorithms (MOEAs) have progressed significantly in recent decades, but most of them are designed to solve unconstrained multiobjective optimization problems. In fact, many real-world multiobjective problems contain a number of constraints. To promote research on constrained multiobjective optimization, we first propose a problem classification scheme with three primary types of difficulty, which reflect various types of challenges presented by real-world optimization problems, in order to characterize the constraint functions in constrained multiobjective optimization problems (CMOPs). These are feasibility-hardness, convergence-hardness, and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable CMOPs (DAS-CMOPs, or DAS-CMaOPs when the number of objectives is greater than three) with three types of parameterized constraint functions developed to capture the three proposed types of difficulty. In fact, the combination of the three primary constraint functions with different parameters allows the construction of a large variety of CMOPs, with difficulty that can be defined by a triplet, with each of its parameters specifying the level of one of the types of primary difficulty. Furthermore, the number of objectives in this toolkit can be scaled beyond three. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs and nine CMaOPs, to be called DAS-CMOP1-9 and DAS-CMaOP1-9, respectively. To evaluate the proposed test problems, two popular CMOEAs—MOEA/D-CDP (MOEA/D with constraint dominance principle) and NSGA-II-CDP (NSGA-II with constraint dominance principle) and two popular constrained many-objective evolutionary algorithms (CMaOEAs)—C-MOEA/DD and C-NSGA-III—are used to compare performance on DAS-CMOP1-9 and DAS-CMaOP1-9 with a variety of difficulty triplets, respectively. The experimental results reveal that mechanisms in MOEA/D-CDP may be more effective in solving convergence-hard DAS-CMOPs, while mechanisms of NSGA-II-CDP may be more effective in solving DAS-CMOPs with simultaneous diversity-, feasibility-, and convergence-hardness. Mechanisms in C-NSGA-III may be more effective in solving feasibility-hard CMaOPs, while mechanisms of C-MOEA/DD may be more effective in solving CMaOPs with convergence-hardness. In addition, none of them can solve these problems efficiently, which stimulates us to continue to develop new CMOEAs and CMaOEAs to solve the suggested DAS-CMOPs and DAS-CMaOPs.
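
The following toy bi-objective problem illustrates the idea of a difficulty triplet, with one parameter per difficulty type; the constraint forms and parameter names are invented placeholders and are not the DAS-CMOP definitions from the paper.

    import math

    def toy_cmop(x, eta=0.5, zeta=0.5, gamma=0.5):
        """Toy constrained bi-objective problem; (eta, zeta, gamma) is an illustrative
        difficulty triplet (NOT the DAS-CMOP constraint definitions):
          eta   ~ feasibility-hardness: shrinks the feasible region,
          zeta  ~ convergence-hardness: keeps solutions away from the unconstrained front,
          gamma ~ diversity-hardness:   carves periodic gaps along the front."""
        g = 1.0 + sum(xi * xi for xi in x[1:])            # distance-type function
        f1 = x[0] * g
        f2 = (1.0 - x[0]) * g
        # Constraints, satisfied when c_i >= 0:
        c1 = (1.0 - eta) - sum(abs(xi) for xi in x[1:])   # eta -> 1: almost nothing is feasible
        c2 = (g - 1.0) - zeta                             # forces g >= 1 + zeta: hinders convergence
        c3 = math.sin(4.0 * math.pi * x[0]) - (2.0 * gamma - 1.0)  # gamma -> 1: gaps on the front
        return (f1, f2), (c1, c2, c3)

    # A mildly difficult instance and a feasible point on it:
    objs, cons = toy_cmop([0.125, 0.0, 0.0], eta=0.25, zeta=0.0, gamma=0.25)
    print(objs, cons)   # objectives (0.125, 0.875); all constraint values >= 0
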
Citations: 90
EvoComposer: An Evolutionary Algorithm for 4-Voice Music Compositions
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00265
R. De Prisco;G. Zaccagnino;R. Zaccagnino
Evolutionary algorithms mimic evolutionary behaviors in order to solve problems. They have been successfully applied in many areas and appear to have a special relationship with creative problems; such a relationship, over the last two decades, has resulted in a long list of applications, including several in the field of music. In this article, we provide an evolutionary algorithm able to compose music. More specifically, we consider the following 4-voice harmonization problem: one of the 4 voices (which are bass, tenor, alto, and soprano) is given as input and the composer has to write the other 3 voices in order to have a complete 4-voice piece of music with a 4-note chord for each input note. Solving such a problem means finding appropriate chords to use for each input note and also finding a placement of the notes within each chord so that melodic concerns are addressed. Such a problem is known as the unfigured harmonization problem. The proposed algorithm for the unfigured harmonization problem, named EvoComposer, uses a novel representation of the solutions in terms of chromosomes (which can handle both harmonic and nonharmonic tones), specialized operators (that exploit musical information to improve the quality of the produced individuals), and a novel hybrid multiobjective evaluation function (based on an original statistical analysis of a large corpus of Bach's music). Moreover, EvoComposer is the first evolutionary algorithm for this specific problem. EvoComposer is a multiobjective evolutionary algorithm, based on the well-known NSGA-II strategy, and takes into consideration two objectives: the harmonic objective, that is, finding appropriate chords, and the melodic objective, that is, finding appropriate melodic lines. The composing process is totally automatic, without any human intervention. We also provide an evaluation study showing that EvoComposer outperforms other metaheuristics by producing better solutions in terms of both well-known measures of performance, such as hypervolume, Δ index, coverage of two sets, and standard measures of music creativity. We conjecture that a similar approach can also be useful for similar musical problems.
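
A minimal sketch of the two competing objectives on a 4-voice chromosome (voices stored as parallel lists of MIDI pitches) is given below; the consonance rule and leap penalty are crude stand-ins for EvoComposer's statistically derived evaluation function.

    CONSONANT = {0, 3, 4, 5, 7, 8, 9}        # crude consonance rule on interval classes (mod 12)

    def harmonic_cost(voices):
        """Count dissonant vertical intervals between every pair of voices, chord by chord."""
        cost = 0
        for chord in zip(*voices):           # one 4-note chord per time step
            for i in range(len(chord)):
                for j in range(i + 1, len(chord)):
                    if abs(chord[i] - chord[j]) % 12 not in CONSONANT:
                        cost += 1
        return cost

    def melodic_cost(voices):
        """Penalise melodic leaps larger than a whole step within each voice."""
        return sum(max(0, abs(b - a) - 2)
                   for voice in voices for a, b in zip(voice, voice[1:]))

    # An NSGA-II-style search would minimise (harmonic_cost, melodic_cost) jointly,
    # with the given input voice held fixed and the other three voices evolved.
    soprano, alto = [72, 71, 72, 74], [67, 67, 67, 69]
    tenor,   bass = [64, 62, 64, 65], [48, 43, 48, 50]
    print(harmonic_cost([soprano, alto, tenor, bass]),
          melodic_cost([soprano, alto, tenor, bass]))    # -> 0 6 for this toy fragment
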
Citations: 15
Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes*
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00258
Andrei Lissovoi;Pietro S. Oliveto;John Alasdair Warwicker
Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end, we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
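
The sketch below gives one plausible reading of the Generalised Random Gradient mechanism on LeadingOnes, with Randomised Local Search variants of different neighbourhood sizes as the low-level heuristics; parameter values and the acceptance rule are our assumptions rather than the paper's exact pseudocode.

    import random

    def leading_ones(x):
        """LeadingOnes: length of the prefix of ones."""
        count = 0
        for bit in x:
            if bit == 0:
                break
            count += 1
        return count

    def rls_k(x, k):
        """One step of Randomised Local Search with neighbourhood size k: flip k distinct bits."""
        y = x[:]
        for i in random.sample(range(len(x)), k):
            y[i] ^= 1
        return y

    def grg_hyper_heuristic(n=60, ks=(1, 2, 3, 4), tau=50, budget=500_000):
        x = [random.randint(0, 1) for _ in range(n)]
        fx = leading_ones(x)
        evals = 0
        while fx < n and evals < budget:
            k = random.choice(ks)                    # select a low-level heuristic at random
            success = True
            while success and fx < n and evals < budget:
                success = False
                for _ in range(tau):                 # the heuristic gets tau trials to improve
                    y = rls_k(x, k)
                    fy = leading_ones(y)
                    evals += 1
                    if fy > fx:                      # improvement: keep exploiting this heuristic
                        x, fx, success = y, fy, True
                        break
        return fx, evals

    print(grg_hyper_heuristic())
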
Citations: 31
Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00260
Y. Akimoto;N. Hansen
We introduce an acceleration for covariance matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with the advantages of separable CMA-ES without inheriting its drawbacks. Technically, we introduce a diagonal matrix D that expresses coordinate-wise variances of the sampling distribution in DCD form. The diagonal matrix can learn a rescaling of the problem in the coordinates within a linear number of function evaluations. Diagonal decoding can also exploit separability of the problem, but, crucially, does not compromise the performance on nonseparable problems. The latter is accomplished by modulating the learning rate for the diagonal matrix based on the condition number of the underlying correlation matrix. dd-CMA-ES not only combines the advantages of default and separable CMA-ES, but may achieve overadditive speedup: it improves the performance, and even the scaling, of the better of default and separable CMA-ES on classes of nonseparable test functions that reflect, arguably, a landscape feature commonly observed in practice. The article makes two further secondary contributions: we introduce two different approaches to guarantee positive definiteness of the covariance matrix with active CMA, which is valuable in particular with large population size; we revise the default parameter setting in CMA-ES, proposing accelerated settings in particular for large dimension. All our contributions can be viewed as independent improvements of CMA-ES, yet they are also complementary and can be seamlessly combined. In numerical experiments with dd-CMA-ES up to dimension 5120, we observe remarkable improvements over the original covariance matrix adaptation on functions with coordinate-wise ill-conditioning. The improvement is observed also for large population sizes up to about dimension squared.
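
A small numerical sketch of what sampling in DCD form means is shown below: offspring are drawn from N(m, σ² D C D), so the diagonal matrix D rescales coordinates independently of the correlation part C. The update rules, learning-rate modulation, and positivity guarantees from the paper are omitted, and the variable names are ours.

    import numpy as np

    def sample_population(m, sigma, d_diag, C, lam, rng):
        """Draw lam offspring from N(m, sigma^2 * diag(d) * C * diag(d)) -- the DCD form."""
        A = np.linalg.cholesky(C)                    # correlation factor: C = A A^T (C must be SPD)
        z = rng.standard_normal((lam, len(m)))       # isotropic samples
        y = z @ A.T                                  # y ~ N(0, C)
        return m + sigma * (y * d_diag)              # diagonal decoding rescales each coordinate

    rng = np.random.default_rng(0)
    m = np.zeros(3)
    C = np.eye(3)                                    # correlation part of the covariance
    d_diag = np.array([1.0, 10.0, 0.1])              # per-coordinate scales adapted by dd-CMA
    pop = sample_population(m, sigma=0.5, d_diag=d_diag, C=C, lam=6, rng=rng)
    print(pop.std(axis=0))                           # spread differs per coordinate by design
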
Citations: 40
Analysis of the (μ/μ_I, λ)-CSA-ES with Repair by Projection Applied to a Conically Constrained Problem
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00261
Patrick Spettel;Hans-Georg Beyer
Theoretical analyses of evolution strategies are indispensable for gaining a deep understanding of their inner workings. For constrained problems, rather simple problems are of interest in the current research. This work presents a theoretical analysis of a multi-recombinative evolution strategy with cumulative step size adaptation applied to a conically constrained linear optimization problem. The state of the strategy is modeled by random variables and a stochastic iterative mapping is introduced. For the analytical treatment, fluctuations are neglected and the mean value iterative system is considered. Nonlinear difference equations are derived based on one-generation progress rates. Based on that, expressions for the steady state of the mean value iterative system are derived. By comparison with real algorithm runs, it is shown that for the considered assumptions, the theoretical derivations are able to predict the dynamics and the steady state values of the real runs.
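
For readers unfamiliar with the algorithm under analysis, here is a hedged sketch of a (μ/μ_I, λ)-ES with cumulative step-size adaptation and repair by projection. The projection and objective below are simple placeholders (nonnegativity clamping and a linear function) standing in for the paper's conically constrained problem, and the CSA constants follow common defaults rather than the paper's setup.

    import numpy as np

    def csa_es_with_repair(f, project, n=10, mu=3, lam=12, sigma=1.0, iters=300, seed=1):
        """(mu/mu_I, lambda)-ES with cumulative step-size adaptation; infeasible offspring
        are repaired by projection before evaluation."""
        rng = np.random.default_rng(seed)
        m = project(np.ones(n))                      # feasible initial mean
        path = np.zeros(n)                           # evolution path for step-size control
        c_s = (mu + 2.0) / (n + mu + 5.0)            # standard-ish CSA constants
        d_s = 1.0 + c_s
        chi_n = np.sqrt(n) * (1.0 - 1.0 / (4.0 * n)) # approximation of E||N(0, I)||
        for _ in range(iters):
            z = rng.standard_normal((lam, n))
            x = project(m + sigma * z)               # repair: project infeasible offspring
            fx = np.apply_along_axis(f, 1, x)
            parents = x[np.argsort(fx)[:mu]]         # truncation selection (minimisation)
            m_new = parents.mean(axis=0)             # intermediate (mu/mu_I) recombination
            path = (1.0 - c_s) * path + np.sqrt(c_s * (2.0 - c_s) * mu) * (m_new - m) / sigma
            sigma *= np.exp((c_s / d_s) * (np.linalg.norm(path) / chi_n - 1.0))
            m = m_new
        return m, sigma

    # Placeholder set-up: a linear objective with projection onto the nonnegative orthant
    # standing in for the paper's projection onto the conical feasible region.
    m_fin, s_fin = csa_es_with_repair(f=lambda v: v.sum(), project=lambda v: np.maximum(v, 0.0))
    print(m_fin.round(3), s_fin)
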
Citations: 2
Generating New Space-Filling Test Instances for Continuous Black-Box Optimization
IF 6.8 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-09-02 | DOI: 10.1162/evco_a_00262
Mario A. Muñoz;Kate Smith-Miles
This article presents a method to generate diverse and challenging new test instances for continuous black-box optimization. Each instance is represented as a feature vector of exploratory landscape analysis measures. By projecting the features into a two-dimensional instance space, the location of existing test instances can be visualized, and their similarities and differences revealed. New instances are generated through genetic programming which evolves functions with controllable characteristics. Convergence to selected target points in the instance space is used to drive the evolutionary process, such that the new instances span the entire space more comprehensively. We demonstrate the method by generating two-dimensional functions to visualize its success, and ten-dimensional functions to test its scalability. We show that the method can recreate existing test functions when target points are co-located with existing functions, and can generate new functions with entirely different characteristics when target points are located in empty regions of the instance space. Moreover, we test the effectiveness of three state-of-the-art algorithms on the new set of instances. The results demonstrate that the new set is not only more diverse than a well-known benchmark set, but also more challenging for the tested algorithms. Hence, the method opens up a new avenue for developing test instances with controllable characteristics, necessary to expose the strengths and weaknesses of algorithms, and drive algorithm development.
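
The loop can be pictured as follows; the two landscape features and the fixed 2x2 projection used here are invented placeholders, not the exploratory landscape analysis measures or the learned projection from the paper.

    import numpy as np

    def cheap_features(func, dim=2, samples=256, rng=None):
        """Summarise a candidate test function by two toy landscape features
        (placeholders for the exploratory landscape analysis measures)."""
        if rng is None:
            rng = np.random.default_rng(0)
        X = rng.uniform(-5.0, 5.0, size=(samples, dim))
        y = np.array([func(x) for x in X])
        y_std = (y - y.mean()) / (y.std() + 1e-12)
        skewness = float(np.mean(y_std ** 3))                   # shape of the f-value distribution
        coeffs = np.linalg.lstsq(np.c_[X, np.ones(samples)], y, rcond=None)[0]
        linearity = float(np.linalg.norm(coeffs[:dim]) / (y.std() + 1e-12))  # fit of a plane
        return np.array([skewness, linearity])

    PROJECTION = np.array([[0.8, -0.6],                         # assumed fixed 2x2 projection
                           [0.6,  0.8]])                        # into the 2-D instance space

    def instance_fitness(func, target_point):
        """Smaller is better: distance from the projected candidate to the target location."""
        z = PROJECTION @ cheap_features(func)
        return float(np.linalg.norm(z - np.asarray(target_point)))

    # A GP run would evolve `func` expressions to minimise this fitness, one target per run.
    print(instance_fitness(lambda x: float(np.sum(x ** 2)), target_point=(1.0, 0.5)))
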
Citations: 30