
Evolutionary Computation: Latest Articles

Multiobjective Evolutionary Algorithms Are Still Good: Maximizing Monotone Approximately Submodular Minus Modular Functions
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-12-01. DOI: 10.1162/evco_a_00288
Chao Qian
As evolutionary algorithms (EAs) are general-purpose optimization algorithms, recent theoretical studies have tried to analyze their performance for solving general problem classes, with the goal of providing a general theoretical explanation of the behavior of EAs. Particularly, a simple multiobjective EA, that is, GSEMO, has been shown to be able to achieve good polynomial-time approximation guarantees for submodular optimization, where the objective function is only required to satisfy some properties and its explicit formulation is not needed. Submodular optimization has wide applications in diverse areas, and previous studies have considered the cases where the objective functions are monotone submodular, monotone non-submodular, or non-monotone submodular. To complement this line of research, this article studies the problem class of maximizing monotone approximately submodular minus modular functions (i.e., g-c) with a size constraint, where g is a so-called non-negative monotone approximately submodular function and c is a so-called non-negative modular function, resulting in the objective function (g-c) being non-monotone non-submodular in general. Different from previous analyses, we prove that by optimizing the original objective function (g-c) and the size simultaneously, the GSEMO fails to achieve a good polynomial-time approximation guarantee. However, we also prove that by optimizing a distorted objective function and the size simultaneously, the GSEMO can still achieve the best-known polynomial-time approximation guarantee. Empirical studies on the applications of Bayesian experimental design and directed vertex cover show the excellent performance of the GSEMO.
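To make the setting concrete, below is a minimal sketch of the GSEMO algorithm on a subset-selection problem: a Pareto archive is kept over two objectives, the scalar objective to be maximized and the subset size to be minimized, and offspring are produced by standard bit mutation. The `objective` callback, the budget, and the omission of the paper's distorted objective and size-constraint handling are simplifications of this sketch, not details taken from the article.

```python
import random

def gsemo(n, objective, budget=10000):
    """Minimal GSEMO sketch: maximize objective(X) and minimize |X| over subsets of {0,...,n-1}.

    `objective` maps a frozenset of indices to a real value, e.g. g(X) - c(X).
    """
    def dominated(a, b):
        # True if b weakly dominates a and differs from it (maximize value, minimize size)
        return b[0] >= a[0] and b[1] <= a[1] and a != b

    empty = frozenset()
    archive = {empty: (objective(empty), 0)}
    for _ in range(budget):
        parent = random.choice(list(archive))
        child = set(parent)
        for i in range(n):                      # standard bit mutation, rate 1/n
            if random.random() < 1.0 / n:
                child.symmetric_difference_update({i})
        child = frozenset(child)
        f_child = (objective(child), len(child))
        if any(dominated(f_child, f) for f in archive.values()):
            continue                            # child is dominated: discard
        # otherwise keep the child and drop archive members it weakly dominates
        archive = {x: f for x, f in archive.items()
                   if not (f_child[0] >= f[0] and f_child[1] <= f[1])}
        archive[child] = f_child
    return archive
```

With g(X) - c(X) (or a distorted variant of it) plugged in as `objective`, the archive at the end holds one best-found subset per size, from which a solution respecting the size constraint can be read off.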
Citations: 13
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-09-01. DOI: 10.1162/evco_a_00286
Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy

A fundamental aspect of learning in biological neural networks is the plasticity property which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, that are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.
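The sketch below is a hedged illustration of the general approach: a Hebbian-style rule is encoded as a small discrete table (here, a weight change in {-1, 0, +1} for each combination of binary pre- and post-synaptic activity), applied locally to every synapse, and the space of such tables is searched with a plain generational genetic algorithm. The encoding, the GA operators, and the `fitness` callback are illustrative placeholders rather than the paper's exact setup.

```python
import random

RULE_KEYS = [(pre, post) for pre in (0, 1) for post in (0, 1)]

def random_rule():
    """A learning rule as a lookup table: (pre, post) activity -> weight change."""
    return {k: random.choice((-1, 0, 1)) for k in RULE_KEYS}

def apply_rule(weights, pre_acts, post_acts, rule, eta=0.1):
    """Local synaptic update: each weight changes only as a function of the
    activities of the two neurons it connects (the Hebbian locality property)."""
    for (i, j), w in weights.items():
        weights[(i, j)] = w + eta * rule[(pre_acts[i], post_acts[j])]
    return weights

def evolve_rules(fitness, pop_size=20, generations=50, mut_prob=0.2):
    """Generational GA over rule tables; `fitness(rule)` is expected to simulate
    an agent's lifetime (calling apply_rule after each step) and return a score."""
    population = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            child = dict(random.choice(parents))
            for k in RULE_KEYS:                    # per-entry mutation
                if random.random() < mut_prob:
                    child[k] = random.choice((-1, 0, 1))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```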

Citations: 11
Iterated Local Search and Other Algorithms for Buffered Two-Machine Permutation Flow Shops with Constant Processing Times on One Machine.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-09-01. DOI: 10.1162/evco_a_00287
Hoang Thanh Le, Philine Geser, Martin Middendorf

The two-machine permutation flow shop scheduling problem with buffer is studied for the special case that all processing times on one of the two machines are equal to a constant c. This case is interesting because it occurs in various applications, for example, when one machine is a packing machine or when materials have to be transported. Different types of buffers and buffer usage are considered. It is shown that all considered buffer flow shop problems remain NP-hard for the makespan criterion even with the restriction to equal processing times on one machine. However, the special case where the constant c is larger or smaller than all processing times on the other machine is shown to be polynomially solvable by presenting an algorithm (2BF-OPT) that calculates optimal schedules in O(nlogn) steps. Two heuristics for solving the NP-hard flow shop problems are proposed: (i) a modification of the commonly used NEH heuristic (mNEH) and (ii) an Iterated Local Search heuristic (2BF-ILS) that uses the mNEH heuristic for computing its initial solution. It is shown experimentally that the proposed 2BF-ILS heuristic obtains better results than two state-of-the-art algorithms for buffered flow shop problems from the literature and an Ant Colony Optimization algorithm. In addition, it is shown experimentally that 2BF-ILS obtains the same solution quality as the standard NEH heuristic, however, with a smaller number of function evaluations.
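For orientation, the sketch below shows the classical NEH heuristic on a buffer-free two-machine permutation flow shop, which is the starting point that mNEH modifies; the buffered problem variants, 2BF-OPT, and 2BF-ILS involve additional bookkeeping that is not reproduced here.

```python
def makespan_two_machine(perm, p1, p2):
    """Makespan of a permutation in a two-machine flow shop (no buffer constraints)."""
    c1 = c2 = 0
    for j in perm:
        c1 += p1[j]                # completion time on machine 1
        c2 = max(c2, c1) + p2[j]   # machine 2 waits for machine 1 and for itself
    return c2

def neh(p1, p2):
    """Classical NEH: order jobs by decreasing total processing time, then insert
    each job at the position that minimizes the partial makespan."""
    jobs = sorted(range(len(p1)), key=lambda j: p1[j] + p2[j], reverse=True)
    perm = []
    for j in jobs:
        best = None
        for pos in range(len(perm) + 1):
            cand = perm[:pos] + [j] + perm[pos:]
            ms = makespan_two_machine(cand, p1, p2)
            if best is None or ms < best[0]:
                best = (ms, cand)
        perm = best[1]
    return perm

# Toy instance of the paper's special case: machine 1 takes a constant c = 3 per job.
p1 = [3, 3, 3, 3]
p2 = [5, 2, 7, 4]
order = neh(p1, p2)
print(order, makespan_two_machine(order, p1, p2))
```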

Citations: 3
Lower Bounds for Non-Elitist Evolutionary Algorithms via Negative Multiplicative Drift.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-06-01. DOI: 10.1162/evco_a_00283
Benjamin Doerr

A decent number of lower bounds for non-elitist population-based evolutionary algorithms have been shown by now. Most of them are technically demanding due to the (hard to avoid) use of negative drift theorems: general results which translate an expected movement away from the target into a high hitting time. We propose a simple negative drift theorem for multiplicative drift scenarios and show that it can simplify existing analyses. We discuss in more detail Lehre's (2010) negative drift in populations method, one of the most general tools to prove lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms for discrete search spaces. Together with other arguments, we obtain an alternative and simpler proof of this result, which also strengthens and simplifies the method. In particular, now only three of the five technical conditions of the previous result have to be verified. The lower bounds we obtain are explicit instead of only asymptotic. This allows us to compute concrete lower bounds for concrete algorithms, but also enables us to show that super-polynomial runtimes appear already when the reproduction rate is only a (1-ω(n^{-1/2})) factor below the threshold. For the special case of algorithms using standard bit mutation with a random mutation rate (called uniform mixing in the language of hyper-heuristics), we prove the result stated by Dang and Lehre (2016b) and extend it to mutation rates other than Θ(1/n), which includes the heavy-tailed mutation operator proposed by Doerr et al. (2017). We finally use our method and a novel domination argument to show an exponential lower bound for the runtime of the mutation-only simple genetic algorithm on OneMax for arbitrary population size.
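For readers less familiar with drift analysis, the classical multiplicative drift theorem below (a standard upper-bound result) illustrates the kind of statement involved; the article's contribution is a simple negative counterpart in which the process drifts multiplicatively away from the target, yielding a lower bound on the hitting time. The exact conditions and constants of the paper's theorem are not reproduced here.

```latex
% Classical multiplicative drift (upper bound), shown only as a reference point;
% the negative variant reverses the drift direction and gives a lower bound instead.
% Process (X_t) over {0} union [x_min, infinity), hitting time T = min{ t : X_t = 0 }.
\[
  \forall x \ge x_{\min}:\;
  \mathbb{E}\bigl[X_t - X_{t+1} \mid X_t = x\bigr] \ge \delta x
  \;\;\Longrightarrow\;\;
  \mathbb{E}[T \mid X_0] \le \frac{1 + \ln(X_0 / x_{\min})}{\delta}.
\]
```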

Citations: 14
Offline Learning with a Selection Hyper-Heuristic: An Application to Water Distribution Network Optimisation.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-06-01. DOI: 10.1162/evco_a_00277
William B Yates, Edward C Keedwell

A sequence-based selection hyper-heuristic with online learning is used to optimise 12 water distribution networks of varying sizes. The hyper-heuristic results are compared with those produced by five multiobjective evolutionary algorithms. The comparison demonstrates that the hyper-heuristic is a computationally efficient alternative to a multiobjective evolutionary algorithm. An offline learning algorithm is used to enhance the optimisation performance of the hyper-heuristic. The optimisation results of the offline trained hyper-heuristic are analysed statistically, and a new offline learning methodology is proposed. The new methodology is evaluated, and shown to produce an improvement in performance on each of the 12 networks. Finally, it is demonstrated that offline learning can be usefully transferred from small, computationally inexpensive problems, to larger computationally expensive ones, and that the improvement in optimisation performance is statistically significant, with 99% confidence.
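As a rough illustration of a sequence-based selection hyper-heuristic, the sketch below keeps a matrix of transition scores over low-level heuristics, picks the next heuristic conditioned on the previously applied one, and updates the scores online from the observed improvement. The scoring rule, acceptance criterion, and all parameters are illustrative assumptions; the article's online and offline learning components differ in detail.

```python
import random

def sequence_hyper_heuristic(start, heuristics, evaluate, iters=1000, alpha=0.1):
    """`heuristics` is a list of callables solution -> solution; `evaluate` returns
    a cost to be minimized. scores[i][j] scores applying heuristic j after heuristic i."""
    h = len(heuristics)
    scores = [[1.0] * h for _ in range(h)]
    best, best_cost = start, evaluate(start)
    prev = random.randrange(h)
    for _ in range(iters):
        row = scores[prev]
        r = random.uniform(0, sum(row))           # roulette-wheel choice of the next heuristic
        acc, nxt = 0.0, h - 1
        for j, s in enumerate(row):
            acc += s
            if r <= acc:
                nxt = j
                break
        candidate = heuristics[nxt](best)
        cost = evaluate(candidate)
        reward = max(0.0, best_cost - cost)       # improvement achieved by this transition
        scores[prev][nxt] = (1 - alpha) * scores[prev][nxt] + alpha * (1.0 + reward)
        if cost <= best_cost:
            best, best_cost = candidate, cost
        prev = nxt
    return best, best_cost
```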

Citations: 3
Improving Model-Based Genetic Programming for Symbolic Regression of Small Expressions.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-06-01. DOI: 10.1162/evco_a_00278
M Virgolin, T Alderliesten, C Witteveen, P A N Bosman

The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is a model-based EA framework that has been shown to perform well in several domains, including Genetic Programming (GP). Differently from traditional EAs where variation acts blindly, GOMEA learns a model of interdependencies within the genotype, that is, the linkage, to estimate what patterns to propagate. In this article, we study the role of Linkage Learning (LL) performed by GOMEA in Symbolic Regression (SR). We show that the non-uniformity in the distribution of the genotype in GP populations negatively biases LL, and propose a method to correct for this. We also propose approaches to improve LL when ephemeral random constants are used. Furthermore, we adapt a scheme of interleaving runs to alleviate the burden of tuning the population size, a crucial parameter for LL, to SR. We run experiments on 10 real-world datasets, enforcing a strict limitation on solution size, to enable interpretability. We find that the new LL method outperforms the standard one, and that GOMEA outperforms both traditional and semantic GP. We also find that the small solutions evolved by GOMEA are competitive with tuned decision trees, making GOMEA a promising new approach to SR.
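For context, GOMEA's variation step (Gene-pool Optimal Mixing) can be sketched as follows for fixed-length genotypes: gene subsets from a Family of Subsets (FOS) produced by linkage learning are copied jointly from random donors, and a change is kept only if the fitness does not worsen. Learning the FOS from the population, the focus of the article, is omitted here; the `fos` argument is assumed to be given.

```python
import random

def gene_pool_optimal_mixing(population, fitness, fos):
    """One GOM pass over a population of list-encoded genotypes (maximization).
    `fos` is a list of index subsets treated as linked and copied together."""
    fits = [fitness(ind) for ind in population]
    new_population = []
    for ind, f in zip(population, fits):
        child, child_fit = list(ind), f
        for subset in fos:
            donor = random.choice(population)
            trial = list(child)
            for i in subset:
                trial[i] = donor[i]               # copy the linked genes jointly
            if trial != child:
                trial_fit = fitness(trial)
                if trial_fit >= child_fit:        # accept only if not worse
                    child, child_fit = trial, trial_fit
        new_population.append(child)
    return new_population
```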

Citations: 49
Probabilistic Contextual and Structural Dependencies Learning in Grammar-Based Genetic Programming.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-06-01. DOI: 10.1162/evco_a_00280
Pak-Kan Wong, Man-Leung Wong, Kwong-Sak Leung

Genetic Programming is a method to automatically create computer programs based on the principles of evolution. The problem of deceptiveness caused by complex dependencies among components of programs is challenging. It is important because it can misguide Genetic Programming to create suboptimal programs. Besides, a minor modification in the programs may lead to a notable change in the program behaviours and affect the final outputs. This article presents Grammar-Based Genetic Programming with Bayesian Classifiers (GBGPBC), in which the probabilistic dependencies among components of programs are captured using a set of Bayesian network classifiers. Our system was evaluated using a set of benchmark problems (the deceptive maximum problems, the royal tree problems, and the bipolar asymmetric royal tree problems). It was shown to be often more robust and more efficient in searching for the best programs than other related Genetic Programming approaches in terms of the total number of fitness evaluations. We studied what factors affect the performance of GBGPBC and discovered that robust variants of GBGPBC were consistently weakly correlated with some complexity measures. Furthermore, our approach has been applied to learn a ranking program on a set of customers in direct marketing. Our suggested solutions help companies to earn significantly more when compared with other solutions produced by several well-known machine learning algorithms, such as neural networks, logistic regression, and Bayesian networks.
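As a generic point of reference only, the sketch below samples programs from a toy context-free grammar with one flat weight per production rule; GBGPBC replaces such flat weights with Bayesian network classifiers that condition production choices on contextual and structural information. The grammar, the weights, and the depth limit are all illustrative assumptions.

```python
import random

GRAMMAR = {  # toy grammar for arithmetic expressions
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>": [["+"], ["*"]],
    "<var>": [["x"], ["y"]],
}

def derive(symbol, weights, depth=0, max_depth=4):
    """Sample a sentence, choosing each production with probability proportional
    to its weight; beyond max_depth the last (non-recursive) production is forced."""
    if symbol not in GRAMMAR:
        return [symbol]                      # terminal symbol
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        production = productions[-1]
    else:
        production = random.choices(productions, weights=weights[symbol], k=1)[0]
    out = []
    for s in production:
        out.extend(derive(s, weights, depth + 1, max_depth))
    return out

weights = {sym: [1.0] * len(prods) for sym, prods in GRAMMAR.items()}
print(" ".join(derive("<expr>", weights)))
```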

Citations: 0
A Decomposition-Based Evolutionary Algorithm with Correlative Selection Mechanism for Many-Objective Optimization.
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-06-01. DOI: 10.1162/evco_a_00279
Ruochen Liu, Ruinan Wang, Renyu Bian, Jing Liu, Licheng Jiao

Decomposition-based evolutionary algorithms have been quite successful in dealing with multiobjective optimization problems. Recently, more and more researchers attempt to apply the decomposition approach to solve many-objective optimization problems. A many-objective evolutionary algorithm based on decomposition with a correlative selection mechanism (MOEA/D-CSM) is also proposed to solve many-objective optimization problems in this article. Since MOEA/D-CSM is based on a decomposition approach which adopts penalty boundary intersection (PBI), a set of reference points must be generated in advance. Thus, a new concept related to the set of reference points is introduced first, namely, the correlation between an individual and a reference point. Thereafter, a new selection mechanism based on the correlation is designed, called the correlative selection mechanism. The correlative selection mechanism finds the correlative individuals for each reference point as soon as possible so that the diversity among population members is maintained. However, when a reference point has two or more correlative individuals, the worse correlative individuals may be removed from the population so that the solutions can be ensured to move toward the Pareto-optimal front. In a comprehensive experimental study, we apply MOEA/D-CSM to a number of many-objective test problems with 3 to 15 objectives and make a comparison with three state-of-the-art many-objective evolutionary algorithms, namely, NSGA-III, MOEA/D, and RVEA. Experimental results show that the proposed MOEA/D-CSM can produce competitive results on most of the problems considered in this study.
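Because the Penalty Boundary Intersection (PBI) value is central to this entry and to the PBI normalization study further down this list, a small reference implementation may be useful. The correlative selection mechanism itself (associating individuals with reference points) is not reproduced here, and the default theta below is just a commonly used setting, not the article's choice.

```python
import math

def pbi(f, weight, ideal, theta=5.0):
    """PBI scalarization of objective vector f for weight vector `weight` and
    ideal point `ideal`: d1 is the projection onto the weight direction, d2 the
    perpendicular distance to it, theta the penalty parameter. Smaller is better."""
    diff = [fi - zi for fi, zi in zip(f, ideal)]
    norm_w = math.sqrt(sum(w * w for w in weight))
    d1 = abs(sum(d * w for d, w in zip(diff, weight))) / norm_w
    d2 = math.sqrt(sum((d - d1 * w / norm_w) ** 2 for d, w in zip(diff, weight)))
    return d1 + theta * d2

# Comparing two candidate objective vectors along one reference direction:
print(pbi([0.4, 0.6], [0.5, 0.5], [0.0, 0.0]))
print(pbi([0.2, 0.9], [0.5, 0.5], [0.0, 0.0]))
```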

Citations: 7
A Systematic Literature Review of the Successors of “NeuroEvolution of Augmenting Topologies”
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-03-02. DOI: 10.1162/evco_a_00282
Evgenia Papavasileiou;Jan Cornelis;Bart Jansen
NeuroEvolution (NE) refers to a family of methods for optimizing Artificial Neural Networks (ANNs) using Evolutionary Computation (EC) algorithms. NeuroEvolution of Augmenting Topologies (NEAT) is considered one of the most influential algorithms in the field. Eighteen years after its invention, a plethora of methods have been proposed that extend NEAT in different aspects. In this article, we present a systematic literature review (SLR) to list and categorize the methods succeeding NEAT. Our review protocol identified 232 papers by merging the findings of two major electronic databases. Applying criteria that determine each paper's relevance and assess its quality resulted in the 61 methods presented in this article. Our review article proposes a new categorization scheme of NEAT's successors into three clusters. NEAT-based methods are categorized based on 1) whether they consider issues specific to the search space or the fitness landscape, 2) whether they combine principles from NE and another domain, or 3) the particular properties of the evolved ANNs. The clustering supports researchers in 1) understanding the current state of the art, which enables them to 2) explore new research directions or 3) benchmark their proposed method against the state of the art if they are interested in comparing, and in 4) positioning themselves in the domain or 5) selecting the method that is most appropriate for their problem.
Citations: 28
Effect of Objective Normalization and Penalty Parameter on Penalty Boundary Intersection Decomposition-Based Evolutionary Many-Objective Optimization Algorithms
IF 6.8, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-03-02. DOI: 10.1162/evco_a_00276
Lei Chen;Kalyanmoy Deb;Hai-Lin Liu;Qingfu Zhang
An objective normalization strategy is essential in any evolutionary multiobjective or many-objective optimization (EMO or EMaO) algorithm, due to the distance calculations between objective vectors required to compute diversity and convergence of population members. For decomposition-based EMO/EMaO algorithms involving the Penalty Boundary Intersection (PBI) metric, normalization is an important matter due to the computation of two distance metrics. In this article, we make a theoretical analysis of the effect of instabilities in the normalization process on the performance of PBI-based MOEA/D and a proposed PBI-based NSGA-III procedure. Although the effect is well recognized in the literature, few theoretical studies have been done so far to understand its true nature and the choice of a suitable penalty parameter value for an arbitrary problem. The developed theoretical results have been corroborated with extensive experimental results on three- to 15-objective convex and non-convex instances of DTLZ and WFG problems. The article draws important theoretical conclusions on PBI-based decomposition algorithms from the study.
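To make the interplay concrete, the standard construction is sketched below: each objective is rescaled using estimates of the ideal and nadir points before the two PBI distances are computed, so errors in those estimates perturb both distances and interact with the penalty parameter theta. The notation follows common usage rather than the article's exact symbols.

```latex
% Objective normalization (ideal point z*, nadir point z^nad) feeding the PBI
% scalarization; \theta is the penalty parameter studied in the article.
\[
  \tilde{f}_i(x) = \frac{f_i(x) - z_i^{*}}{z_i^{\mathrm{nad}} - z_i^{*}},
  \qquad i = 1, \dots, m,
\]
\[
  d_1 = \frac{\bigl\lvert \tilde{F}(x)^{\mathsf{T}} \lambda \bigr\rvert}{\lVert \lambda \rVert},
  \qquad
  d_2 = \Bigl\lVert \tilde{F}(x) - d_1 \frac{\lambda}{\lVert \lambda \rVert} \Bigr\rVert,
  \qquad
  \operatorname{PBI}(x \mid \lambda) = d_1 + \theta\, d_2 .
\]
```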
Citations: 12