Multi-Objective Factored Evolutionary Optimization and the Multi-Objective Knapsack Problem
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870377
A. Peerlinck, John W. Sheppard
We propose a factored evolutionary framework for multi-objective optimization that can incorporate any population-based multi-objective algorithm. Our framework, which is based on Factored Evolutionary Algorithms, uses overlapping subpopulations to increase exploration of the objective space; however, it also allows for the creation of distinct subpopulations as in cooperative co-evolutionary algorithms (CCEAs). We apply the framework to the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II), resulting in Factored NSGA-II. We compare NSGA-II, CC-NSGA-II, and F-NSGA-II on two different versions of the multi-objective knapsack problem. The first is the classic binary multi-knapsack formulation introduced by Zitzler and Thiele, where the number of objectives equals the number of knapsacks. The second uses a single knapsack where, aside from maximizing profit and minimizing weight, an additional objective tries to minimize the difference in weight of the items in the knapsack, creating a balanced knapsack. We further extend this version to minimize volume and balance the volume. The proposed 3-to-5 objective balanced single-knapsack problem poses a difficult challenge for multi-objective algorithms. Our results indicate that the non-dominated solutions found by F-NSGA-II tend to cover more of the Pareto front and have a larger hypervolume.
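As an illustration of the balanced single-knapsack formulation described above, the sketch below evaluates a binary item-selection vector under three objectives: maximize profit, minimize weight, and minimize the weight imbalance among the selected items. The exact objective and penalty definitions here are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def balanced_knapsack_objectives(selection, profits, weights, capacity):
    """Evaluate a 0/1 selection vector on a 3-objective balanced knapsack.

    Objectives (all expressed for minimization): negative total profit,
    total weight, and weight imbalance (max - min weight of chosen items).
    Capacity violations are penalized; the penalty scheme is an assumed
    illustration, not the paper's.
    """
    selection = np.asarray(selection, dtype=bool)
    chosen_w = weights[selection]
    total_profit = profits[selection].sum()
    total_weight = chosen_w.sum()
    imbalance = chosen_w.max() - chosen_w.min() if chosen_w.size > 0 else 0.0
    penalty = max(0.0, total_weight - capacity) * profits.sum()  # assumed penalty
    return (-total_profit + penalty, total_weight, imbalance)

# Example: 6 items, knapsack capacity 10
rng = np.random.default_rng(0)
profits = rng.uniform(1, 10, size=6)
weights = rng.uniform(1, 5, size=6)
print(balanced_knapsack_objectives([1, 0, 1, 1, 0, 0], profits, weights, capacity=10.0))
```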
{"title":"Multi-Objective Factored Evolutionary Optimization and the Multi-Objective Knapsack Problem","authors":"A. Peerlinck, John W. Sheppard","doi":"10.1109/CEC55065.2022.9870377","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870377","url":null,"abstract":"We propose a factored evolutionary framework for multi-objective optimization that can incorporate any multi-objective population based algorithm. Our framework, which is based on Factored Evolutionary Algorithms, uses overlapping subpopulations to increase exploration of the objective space; however, it also allows for the creation of distinct subpopulations as in co-operative co-evolutionary algorithms (CCEA). We apply the framework with the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II), resulting in Factored NSGA-II. We compare NSGA-II, CC-NSGA-II, and F-NSGA-II on two different versions of the multi-objective knapsack problem. The first is the classic binary multi-knapsack implementation introduced by Zitzler and Thiele, where the number of objectives equals the number of knapsacks. The second uses a single knapsack where, aside from maximizing profit and minimizing weight, an additional objective tries to minimize the difference in weight of the items in the knapsack, creating a balanced knapsack. We further extend this version to minimize volume and balance the volume. The proposed 3-to-5 objective balanced single knapsack problem poses a difficult problem for multi-objective algorithms. Our results indicate that the non-dominated solutions found by F-NSGA-II tend to cover more of the Pareto front and have a larger hypervolume.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115978079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Punctuated Equilibrium and Neutral Networks in Genetic Algorithms
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870348
Eugen Croitoru, Alexandru–Denis Chipărus, H. Luchian
Taking focused inspiration from biological evolution, we present an empirical study which shows that a Simple Genetic Algorithm (SGA) exhibits punctuated equilibria and punctuated gradualism in its evolution. Using the concept of consensus sequences, and comparing genotype change to phenotype change, we show how an SGA explores candidate solutions along a neutral network: Hamming-proximal bitstrings of similar fitness. Alongside mapping the normal functioning of an SGA, we monitor the formation of error thresholds “from above” by starting with a high mutation probability and slowly lowering it over hundreds of thousands of generations. The formation of a stable consensus sequence is marked by a measurable upheaval in the dynamics of the population, leading to an efficient exploration of the search space in a short time. After the global optimum is found, we can still measure the degree of exploration the SGA performs on that neutral network, and observe punctuated equilibria. We use 11 numerical benchmark functions, along with the Royal Road Function and a similar bit-block Trap Function; the phenomena observed are largely similar on all of them, pointing to a generic behaviour of Genetic Algorithms rather than to problem particularities. Using a consensus sequence (a per-locus-mode chromosome) obscures quasispecies dynamics. This is why we use a per-locus-mean chromosome to measure information change between successive generations, and plot the number and maximal size of Quasispecies and Neutral Networks.
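A minimal sketch of the two population summaries mentioned above: the per-locus-mode chromosome (the consensus sequence) and the per-locus-mean chromosome of a bitstring population. The function name and layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def consensus_and_mean(population):
    """population: (n_individuals, n_loci) array of 0/1 genes.

    Returns the per-locus-mode chromosome (the consensus sequence) and the
    per-locus-mean chromosome, which retains the allele-frequency information
    that taking the mode discards.
    """
    population = np.asarray(population)
    per_locus_mean = population.mean(axis=0)          # fraction of 1s at each locus
    consensus = (per_locus_mean >= 0.5).astype(int)   # majority allele per locus
    return consensus, per_locus_mean

pop = np.array([[1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 0, 1, 1]])
print(consensus_and_mean(pop))  # ([1, 0, 1, 1], [0.667, 0.333, 0.667, 1.0])
```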
{"title":"Punctuated Equilibrium and Neutral Networks in Genetic Algorithms","authors":"Eugen Croitoru, Alexandru–Denis Chipărus, H. Luchian","doi":"10.1109/CEC55065.2022.9870348","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870348","url":null,"abstract":"Taking focused inspiration from biological evolution, we present an empirical study which shows that a Simple Genetic Algorithm (SGA) exhibits punctuated equilibria and punctuated gradualism in its evolution. Using the concept of consensus sequences, and comparing genotype change to phenotype change, we show how an SGA explores candidate solutions along a neutral network - Hamming-proximal bitstrings of similar fit-ness. Alongside mapping the normal functioning of an SGA, we monitor the formation of error thresholds “from above” by starting with a high mutation probability and slowly lowering it, during hundreds of thousands of generations. The formation of a stable consensus sequence is marked by a measurable upheaval in the dynamics of the population, leading to an efficient exploration of the search space in a short time. After the global optimum is found, we can still measure the degree of exploration the SGA performs on that neutral network, and observe punctuated equilibria. We use 11 numerical benchmark functions, along with the Royal Road Function, and a similar bit block Trap Function; the phenomena observed are largely similar on all of them, pointing to a generic behaviour of Genetic Algorithms, rather than problem particularities. Using a consensus sequence (a per-locus-mode chromosome) obscures quasispecies dynamics. This is why we use a per-locus-mean chromosome to measure information change between successive generations, and plot the number and maximal size of Quasispecies and Neutral Networks.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116765316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolutionary Algorithms for Planning Remote Electricity Distribution Networks Considering Isolated Microgrids and Geographical Constraints
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870233
Manou Rosenberg, Mark Reynolds, T. French, Lyndon While
In this study, we propose obstacle-aware evolutionary algorithms to identify optimised network topologies for electricity distribution networks, including isolated microgrids and stand-alone power systems. We outline the extension of two evolutionary algorithms that are modified to consider different types of geographically constrained areas in electricity distribution planning. These areas are represented as polygonal obstacles that either cannot be traversed or incur a higher weight factor when traversed. Both proposed evolutionary algorithms are extended so that they find optimised network solutions that avoid solid obstacles and account for the increased cost of traversing soft obstacles. The algorithms are tested and compared on different types of problem instances with solid and soft obstacles, and the problem-specific evolutionary algorithm is shown to successfully find low-cost network topologies on a range of different test instances.
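The sketch below illustrates one way an edge cost could account for the polygonal obstacles described above, using the shapely geometry library. The cost model (infinite cost when a solid obstacle is crossed, a weight factor on the intersected length inside soft obstacles) is an assumption for illustration, not the paper's exact formulation.

```python
import math
from shapely.geometry import LineString, Polygon

def edge_cost(p1, p2, solid_obstacles, soft_obstacles, soft_factor=3.0):
    """Cost of a straight line segment from p1 to p2.

    solid_obstacles / soft_obstacles: lists of shapely Polygons.
    Crossing a solid obstacle makes the edge infeasible (infinite cost);
    length inside a soft obstacle is charged at soft_factor times normal cost.
    """
    edge = LineString([p1, p2])
    if any(edge.intersects(poly) for poly in solid_obstacles):
        return math.inf
    base = edge.length
    extra = sum(edge.intersection(poly).length * (soft_factor - 1.0)
                for poly in soft_obstacles)
    return base + extra

solid = [Polygon([(2, 0), (3, 0), (3, 3), (2, 3)])]
soft = [Polygon([(5, -1), (6, -1), (6, 4), (5, 4)])]
print(edge_cost((0, 0), (1, 1), solid, soft))   # no obstacle crossed
print(edge_cost((0, 1), (4, 1), solid, soft))   # crosses the solid obstacle -> inf
print(edge_cost((4, 1), (8, 1), solid, soft))   # passes through the soft obstacle
```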
{"title":"Evolutionary Algorithms for Planning Remote Electricity Distribution Networks Considering Isolated Microgrids and Geographical Constraints","authors":"Manou Rosenberg, Mark Reynolds, T. French, Lyndon While","doi":"10.1109/CEC55065.2022.9870233","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870233","url":null,"abstract":"In this study we propose obstacle-aware evolution-ary algorithms to identify optimised network topologies for electricity distribution networks including isolated microgrids or stand-alone power systems. We outline the extension of two evo-lutionary algorithms that are modified to consider different types of geographically constrained areas in electricity distribution planning. These areas are represented as polygonal obstacles that either cannot be traversed or cause a higher weight factor when traversing. Both proposed evolutionary algorithms are extended such that they find optimised network solutions that avoid solid obstacles and consider the increased cost of traversing soft obstacles. The algorithms are tested and compared on different types of problem instances with solid and soft obstacles and the problem-specific evolutionary algorithm can be shown to successfully find low cost network topologies on a range of different test instances.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116073295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Restructuring Particle Swarm Optimization algorithm based on linear system theory
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870261
Jian-lin Zhu, Jianhua Liu, Zihang Wang, Yuxiang Chen
The original Particle Swarm Optimization (PSO) algorithm uses two formulas, inspired by the foraging behavior of bird swarms, to update each particle's position and velocity. Typical improvements to PSO adjust and optimize its parameters or incorporate new learning strategies into the velocity update formula for better performance, but these methods lack theoretical analysis and make the algorithm more complex. This paper proposes a new formulation that restructures the particles' position updating behavior based on linear system theory, yielding the Restructuring PSO algorithm (RPSO). Compared with conventional PSO, RPSO uses only a single position updating formula, with no velocity updating formula, and takes fewer parameters. To verify the effectiveness of RPSO, experiments on the CEC 2013 benchmark functions were conducted comparing it with four other algorithms, and the results show that the proposed algorithm is competitive.
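For reference, the sketch below shows the two conventional PSO update formulas the abstract refers to (velocity update followed by position update). RPSO's single position-only update is not reproduced here, since the abstract does not state it.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One iteration of the classic two-formula PSO update.

    x, v, pbest: (n_particles, n_dims) arrays; gbest: (n_dims,) array.
    RPSO, as described in the paper, replaces this pair of formulas with a
    single position update derived from linear system theory.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    x = x + v                                                   # position update
    return x, v
```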
{"title":"Restructuring Particle Swarm Optimization algorithm based on linear system theory","authors":"Jian-lin Zhu, Jianhua Liu, Zihang Wang, Yuxiang Chen","doi":"10.1109/CEC55065.2022.9870261","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870261","url":null,"abstract":"The original Particle Swarm Optimization (PSO) used two formulas to describe updating of particle's position and velocity, respectively, based on simulating the foraging behavior of bird swarm. The general improving methods on PSO are to adjust and optimize its parameters or combine new learning strategy to update velocity formula for the better performance. But these methods lack of theoretical analysis and make the algorithm more complex. This paper proposes a new formulation to restructure the particles' position updating behaviors based on linear system theory, and obtain a Restructuring PSO algorithm (RPSO). Compared with the conventional PSO algorithm, RPSO only uses one particle position updating formula, without velocity updating formula, and takes fewer parameters. In order to verify the effectiveness of RPSO, experiments on the CEC 2013 benchmark functions have been conducted to compare with four algorithms, and the final results show that proposed algorithm has a certain degree of competition.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116273721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bicriterion Coevolution for the Multi-objective Travelling Salesperson Problem
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870282
Ying Liu, P. Thulasiraman, N. Pillay
The travelling salesperson problem is an NP-hard combinatorial optimization problem. In this paper, we consider the multi-objective travelling salesperson problem (MTSP), both static and dynamic, with conflicting objectives. NSGA-II and MOEA/D, two popular evolutionary multi-objective optimization algorithms, suffer from loss of diversity and poor convergence when applied separately to MTSP. However, both techniques have their individual strengths. NSGA-II maintains diversity through non-dominated sorting and crowding-distance selection. MOEA/D is good at exploring extreme points on the Pareto front with faster convergence. In this paper, we adopt the bicriterion framework that exploits the strengths of Pareto-Criterion (PC) and Non-Pareto-Criterion (NPC) evolutionary populations. In this research, NSGA-II (PC) and MOEA/D (NPC) coevolve to compensate for each other's loss of diversity. We further improve convergence using local search and a hybrid of order crossover and inver-over operators. To our knowledge, this is the first work that combines NSGA-II and MOEA/D in a bicriterion framework for solving MTSP, both static and dynamic. We perform various experiments on different MTSP benchmark datasets with and without traffic factors to study static and dynamic MTSP. Our proposed algorithm is compared against standard algorithms such as NSGA-II & III, MOEA/D, and a baseline divide-and-conquer coevolution technique, using performance metrics such as inverted generational distance, hypervolume, and the spacing metric to concurrently quantify the convergence and diversity of our proposed algorithm. We also compare our results to datasets used in the literature and show that our proposed algorithm performs empirically better than the compared algorithms.
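One of the variation operators named above, order crossover (OX) for permutation-encoded tours, can be sketched as follows; this is a textbook version, not the authors' implementation.

```python
import random

def order_crossover(parent1, parent2, rng=random):
    """Order crossover (OX) for permutation-encoded TSP tours.

    Copies a random slice of parent1 into the child, then fills the
    remaining positions (starting after the slice and wrapping around)
    with the cities of parent2 in the order they appear, skipping cities
    already present.
    """
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]
    kept = set(child[a:b + 1])
    fill = [c for c in parent2[b + 1:] + parent2[:b + 1] if c not in kept]
    positions = [(b + 1 + k) % n for k in range(n - (b - a + 1))]
    for i, c in zip(positions, fill):
        child[i] = c
    return child

random.seed(1)
print(order_crossover([0, 1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1, 0]))
```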
{"title":"Bicriterion Coevolution for the Multi-objective Travelling Salesperson Problem","authors":"Ying Liu, P. Thulasiraman, N. Pillay","doi":"10.1109/CEC55065.2022.9870282","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870282","url":null,"abstract":"The travelling salesperson problem is an NP-hard combinatorial optimization problem. In this paper, we consider the multi-objective travelling salesperson problem (MTSP), both static and dynamic, with conflicting objectives. NSGA-II and MOEA/D, two popular evolutionary multi-objective optimization algorithms suffer from loss of diversity and poor convergence when applied separately on MTSP. However, both these techniques have their individual strengths. NSGA-II maintains di-versity through non-dominated sorting and crowding distance selection. MOEA/D is good at exploring extreme points on the Pareto front with faster convergence. In this paper, we adopt the bicriterion framework that exploits the strengths of Pareto-Criterion (PC) and Non-Pareto Criterion (NPC) evolutionary populations. In this research, NSGA-II (PC) and MOEA/D (NPC) coevolve to compensate the diversity of each other. We further improve the convergence using local search and a hybrid of order crossover and inver-over operators. To our knowledge, this is the first work that combines NSGA-II and MOEA/D in a bicriterion framework for solving MTSP, both static and dynamic. We perform various experiments on different MTSP bench-mark datasets with and without traffic factors to study static and dynamic MTSP. Our proposed algorithm is compared against standard algorithms such as NSGA-II & III, MOEA/D, and a baseline divide and conquer coevolution technique using performance metrics such as inverted generational distance, hypervolume, and the spacing metric to concurrently quantify the convergence and diversity of our proposed algorithm. We also compare our results to datasets used in the literature and show that our proposed algorithm performs empirically better than compared algorithms.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127166802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving Weighted Contact Networks for Epidemic Modeling: the Ring and the Power
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870440
James Sargant, S. Houghten, Michael Dubé
A generative evolutionary algorithm is used to evolve weighted personal contact networks that represent physical contact between individuals, and thus possible paths of infection during an epidemic. The evolutionary algorithm evolves a list of edge-editing operations applied to an initial graph. Two initial graphs are considered, a ring graph and a power-law graph. Different probabilities of infection and a wide range of weights are considered, which improve performance over other work. Modified edge operations are introduced, which also improve performance. It is shown that when trying to maximize epidemic duration, the best results are obtained when using the ring graph as the initial graph. When attempting to match a given epidemic profile, similar results are obtained when using either initial graph, but both improve performance over other work.
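A small sketch, using networkx, of the kind of representation described above: a linear chromosome of edge-editing operations applied to an initial ring graph to produce a weighted contact network. The operation set and encoding here are assumptions for illustration, not the paper's exact operators.

```python
import networkx as nx

def apply_edge_edits(n_nodes, edits):
    """Build a weighted contact network from an initial ring graph.

    edits: list of ('add', u, v, w), ('remove', u, v) or ('reweight', u, v, w)
    tuples, applied in order; operations that do not apply are skipped.
    """
    g = nx.cycle_graph(n_nodes)                 # initial ring graph
    nx.set_edge_attributes(g, 1.0, "weight")    # unit contact weights
    for op in edits:
        if op[0] == "add" and not g.has_edge(op[1], op[2]):
            g.add_edge(op[1], op[2], weight=op[3])
        elif op[0] == "remove" and g.has_edge(op[1], op[2]):
            g.remove_edge(op[1], op[2])
        elif op[0] == "reweight" and g.has_edge(op[1], op[2]):
            g[op[1]][op[2]]["weight"] = op[3]
    return g

g = apply_edge_edits(8, [("add", 0, 4, 2.5), ("remove", 1, 2), ("reweight", 3, 4, 0.5)])
print(sorted(g.edges(data="weight")))
```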
{"title":"Evolving Weighted Contact Networks for Epidemic Modeling: the Ring and the Power","authors":"James Sargant, S. Houghten, Michael Dubé","doi":"10.1109/CEC55065.2022.9870440","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870440","url":null,"abstract":"A generative evolutionary algorithm is used to evolve weighted personal contact networks that represent physical contact between individuals, and thus possible paths of infection during an epidemic. The evolutionary algorithm evolves a list of edge-editing operations applied to an initial graph. Two initial graphs are considered, a ring graph and a power-law graph. Different probabilities of infection and a wide range of weights are considered, which improve performance over other work. Modified edge operations are introduced, which also improve performance. It is shown that when trying to maximize epidemic duration, the best results are obtained when using the ring graph as the initial graph. When attempting to match a given epidemic profile, similar results are obtained when using either initial graph, but both improve performance over other work.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125569606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NL-SHADE-LBC algorithm with linear parameter adaptation bias change for CEC 2022 Numerical Optimization
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870295
V. Stanovov, S. Akhmedova, E. Semenkin
In this paper, an adaptive differential evolution algorithm is presented that includes a set of concepts such as linear bias change in parameter adaptation, repetitive generation of points for bound constraint handling, non-linear population size reduction, and selective pressure. The proposed algorithm is applied to the CEC 2022 Bound Constrained Single Objective Numerical Optimization benchmark problems. The computational experiments and analysis of the results demonstrate that the NL-SHADE-LBC algorithm presented in this study achieves high efficiency in solving complex optimization problems compared to the winners of previous years' competitions.
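As a rough illustration of the "repetitive generation of points for bound constraint handling" mentioned above, the sketch below resamples an out-of-bounds trial vector a bounded number of times and then falls back to a midpoint-between-parent-and-bound repair. The resampling budget and the fallback rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def bounded_trial(generate, parent, lower, upper, max_tries=10):
    """Generate a trial vector inside [lower, upper] by repeated resampling.

    generate: callable returning a candidate vector (e.g. DE mutation + crossover).
    If every attempt leaves some component out of bounds, those components are
    repaired to the midpoint between the parent and the violated bound
    (a common DE repair rule, assumed here as the fallback).
    """
    for _ in range(max_tries):
        trial = generate()
        if np.all((trial >= lower) & (trial <= upper)):
            return trial
    trial = np.asarray(trial, dtype=float).copy()
    low, high = trial < lower, trial > upper
    trial[low] = (parent[low] + lower[low]) / 2.0
    trial[high] = (parent[high] + upper[high]) / 2.0
    return trial
```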
{"title":"NL-SHADE-LBC algorithm with linear parameter adaptation bias change for CEC 2022 Numerical Optimization","authors":"V. Stanovov, S. Akhmedova, E. Semenkin","doi":"10.1109/CEC55065.2022.9870295","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870295","url":null,"abstract":"In this paper the adaptive differential evolution algorithm is presented, which includes a set of concepts, such as linear bias change in parameter adaptation, repetitive generation of points for bound constraint handling, as well as non-linear population size reduction and selective pressure. The proposed algorithm is used to solve the problems of the CEC 2022 Bound Constrained Single Objective Numerical Optimization bench-mark problems. The computational experiments and analysis of the results demonstrate that the NL-SHADE-LBC algorithm presented in this study is able to demonstrate high efficiency in solving complex optimization problems compared to the winners of the previous years' competitions.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127100991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IMODEII: an Improved IMODE algorithm based on the Reinforcement Learning
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870420
Karam M. Sallam, Mohamed Abdel-Basset, Mohammed El-Abd, A. W. Mohamed
The success of the differential evolution algorithm depends on its offspring breeding strategy and the associated control parameters. Improved Multi-Operator Differential Evolution (IMODE) proved its efficiency and ranked first in the CEC2020 competition. In this paper, an improved IMODE, called IMODEII, is introduced. In IMODEII, Reinforcement Learning (RL), a computational methodology that simulates interaction-based learning, is used as an adaptive operator selection approach. RL is used to select the best-performing of three actions during the optimization process, evolving a set of solutions based on the population state and reward value. Unlike IMODE, IMODEII uses only two mutation strategies. We tested the performance of the proposed IMODEII on 12 benchmark functions with 10 and 20 variables taken from the CEC2022 competition on single-objective bound-constrained numerical optimisation. A comparison between the proposed IMODEII and state-of-the-art algorithms is conducted, with the results demonstrating the efficiency of the proposed IMODEII.
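A minimal sketch of reinforcement-learning-style adaptive operator selection in the spirit of what the abstract describes: a value estimate per operator, updated from rewards and used with ε-greedy selection. The state representation and reward definition used in IMODEII are not given in the abstract, so this stateless bandit version is an illustrative assumption.

```python
import numpy as np

class OperatorSelector:
    """Epsilon-greedy, value-based selection among mutation operators."""

    def __init__(self, n_operators, epsilon=0.1, lr=0.1, rng=None):
        self.q = np.zeros(n_operators)        # running value estimate per operator
        self.epsilon, self.lr = epsilon, lr
        self.rng = np.random.default_rng() if rng is None else rng

    def select(self):
        if self.rng.random() < self.epsilon:          # explore a random operator
            return int(self.rng.integers(len(self.q)))
        return int(np.argmax(self.q))                 # exploit the best-so-far operator

    def update(self, operator, reward):
        # e.g. reward = relative fitness improvement produced by the operator
        self.q[operator] += self.lr * (reward - self.q[operator])

sel = OperatorSelector(n_operators=3, rng=np.random.default_rng(0))
op = sel.select()
sel.update(op, reward=0.2)
```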
{"title":"IMODEII: an Improved IMODE algorithm based on the Reinforcement Learning","authors":"Karam M. Sallam, Mohamed Abdel-Basset, Mohammed El-Abd, A. W. Mohamed","doi":"10.1109/CEC55065.2022.9870420","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870420","url":null,"abstract":"The success of differential evolution algorithm depends on its offspring breeding strategy and the associated control parameters. Improved Multi-Operator Differential Evolution (IMODE) proved its efficiency and ranked first in the CEC2020 competition. In this paper, an improved IMODE, called IMODEII, is introduced. In IMODEII, Reinforcement Learning (RL), a computational methodology that simulates interaction-based learning, is used as an adaptive operator selection approach. RL is used to select the best-performing action among three of them in the optimization process to evolve a set of solution based on the population state and reward value. Different from IMODE, only two mutation strategies have been used in IMODEII. We tested the performance of the proposed IMODEII by considering 12 benchmark functions with 10 and 20 variables taken from CEC2022 competition on single objective bound constrained numerical optimisation. A comparison between the proposed IMODEII and the state-of-the-art algorithms is conducted, with the results demonstrating the efficiency of the proposed IMODEII.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131392951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variational Autoencoders and Evolutionary Algorithms for Targeted Novel Enzyme Design
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870421
Miguel Martins, M. Rocha, Vítor Pereira
Recent developments in Generative Deep Learning have fostered new engineering methods for protein design. Although deep generative models trained on protein sequences can learn biologically meaningful representations, the design of proteins with optimised properties remains a challenge. To address this problem, we combined deep learning architectures with evolutionary computation to steer the protein generative process towards specific sets of properties. The latent space of a Variational Autoencoder is explored by evolutionary algorithms to find the best candidates. A set of single-objective and multi-objective problems was conceived to evaluate the algorithms' capacity to optimise proteins. The optimisation tasks consider the proteins' average hydrophobicity, their solubility, and the probability of being generated by a defined functional Hidden Markov Model profile. The results show that evolutionary algorithms can achieve good results while allowing for more variability in the design of the experiment, thus resulting in a much larger set of possibly functional novel proteins.
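A skeletal sketch of the latent-space search loop described above: a simple truncation-selection evolutionary step over latent vectors, where `decode` and `score` stand in for the trained VAE decoder and the property objectives (hydrophobicity, solubility, HMM profile score). Both callables, and all parameter values, are assumed placeholders.

```python
import numpy as np

def evolve_latent(decode, score, latent_dim=32, pop_size=50, n_parents=10,
                  generations=100, sigma=0.2, rng=None):
    """Search a VAE latent space for high-scoring protein sequences.

    decode: latent vector -> protein sequence (the trained VAE decoder).
    score: protein sequence -> scalar fitness to maximise (e.g. a weighted
    combination of hydrophobicity, solubility and HMM profile probability).
    """
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.normal(size=(pop_size, latent_dim))     # start near the VAE prior
    for _ in range(generations):
        fitness = np.array([score(decode(z)) for z in pop])
        parents = pop[np.argsort(fitness)[-n_parents:]]           # truncation selection
        children = parents[rng.integers(n_parents, size=pop_size)]
        pop = children + sigma * rng.normal(size=children.shape)  # Gaussian mutation
    fitness = np.array([score(decode(z)) for z in pop])
    return pop[int(np.argmax(fitness))]
```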
{"title":"Variational Autoencoders and Evolutionary Algorithms for Targeted Novel Enzyme Design","authors":"Miguel Martins, M. Rocha, Vítor Pereira","doi":"10.1109/CEC55065.2022.9870421","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870421","url":null,"abstract":"Recent developments in Generative Deep Learning have fostered new engineering methods for protein design. Although deep generative models trained on protein sequence can learn biologically meaningful representations, the design of proteins with optimised properties remains a challenge. We combined deep learning architectures with evolutionary computation to steer the protein generative process towards specific sets of properties to address this problem. The latent space of a Variational Autoencoder is explored by evolutionary algorithms to find the best candidates. A set of single-objective and multi-objective problems were conceived to evaluate the algorithms' capacity to optimise proteins. The optimisation tasks consider the average proteins' hydrophobicity, their solubility and the probability of being generated by a defined functional Hidden Markov Model profile. The results show that Evolutionary Algorithms can achieve good results while allowing for more variability in the design of the experiment, thus resulting in a much greater set of possibly functional novel proteins.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130433331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving Neural Networks for a Generalized Divide the Dollar Game
Pub Date: 2022-07-18 | DOI: 10.1109/CEC55065.2022.9870386
G. Greenwood, D. Ashlock
Divide the dollar is a simplified version of a game invented by John Nash to study the bargaining problem. The generalized divide the dollar game is an n-player version. Evolutionary algorithms can be used to evolve players for this game, but it has previously been shown that the representation has a profound effect on the success of the evolutionary search. Representation defines both the genome and the move (search) operator used by the evolutionary algorithm. This study investigates how well two representations for a 3-player generalized divide the dollar game, one using a differential evolution move operator and the other a CMA-ES move operator, can find good players implemented as neural networks. Our results indicate both representations can evolve very good player trios, but the CMA-ES representation tends to evolve fairer players.
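The payoff rule commonly used for the n-player generalized divide the dollar game (each player bids a share; if the bids sum to at most one dollar every player receives their bid, otherwise all receive nothing) can be sketched as follows; the neural-network players and the DE/CMA-ES move operators from the study are not reproduced here, and the paper's exact variant of the rule may differ.

```python
def divide_the_dollar_payoffs(bids, dollar=1.0):
    """Payoffs for one round of the n-player generalized divide the dollar.

    If the bids sum to at most the dollar, each player keeps their bid;
    otherwise every player gets zero (the commonly used rule; assumed here).
    """
    if sum(bids) <= dollar:
        return list(bids)
    return [0.0] * len(bids)

print(divide_the_dollar_payoffs([0.4, 0.3, 0.2]))  # feasible: [0.4, 0.3, 0.2]
print(divide_the_dollar_payoffs([0.5, 0.4, 0.3]))  # overbid:  [0.0, 0.0, 0.0]
```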
{"title":"Evolving Neural Networks for a Generalized Divide the Dollar Game","authors":"G. Greenwood, D. Ashlock","doi":"10.1109/CEC55065.2022.9870386","DOIUrl":"https://doi.org/10.1109/CEC55065.2022.9870386","url":null,"abstract":"Divide the dollar is a simpler version of a game invented by John Nash to study the bargaining problem. The generalized divide the dollar game is an n-player version. Evolutionary algorithms can be used to evolve players for this game, but it has been previously shown representation has a profound effect on the success of the evolutionary search. Representation defines both the genome and the move (search) operator used by the evolutionary algorithm. This study investigates how well two representations for a 3-player generalized divide the dollar game, one using a differential evolution move operator and the other a CMA-ES move operator, can find good players implemented as neural networks. Our results indicate both representations can evolve very good player trios, but the CMA-ES representation tends to evolve fairer players.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116616047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}