{"title":"A cooperative and self-adaptive metaheuristic for the facility location problem","authors":"D. Meignan, Jean-Charles Créput, A. Koukam","doi":"10.1145/1569901.1569946","DOIUrl":"https://doi.org/10.1145/1569901.1569946","url":null,"abstract":"This paper presents a coalition-based metaheuristic (CBM) to solve the uncapacitated facility location problem. CBM is a population-based metaheuristic where individuals encapsulate a single solution and are considered as agents. In comparison to classical evolutionary algorithms, these agents have additional capacities of decision, learning and cooperation. Our approach is also a case study to present how concepts from the multiagent systems domain may contribute to the design of new metaheuristics. The tackled problem is a well-known combinatorial optimization problem, namely the uncapacitated facility location problem, which consists in determining the sites at which facilities must be set up to satisfy the requirements of a set of clients at minimum cost. A computational experiment is conducted to test the performance of the learning mechanisms and to compare our approach with several existing metaheuristics. The results show that CBM is competitive with powerful heuristic approaches and presents several advantages in terms of flexibility and modularity.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130492617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
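For concreteness, the objective of the uncapacitated facility location problem tackled above can be sketched as follows. This is only the problem's cost function, not the CBM algorithm itself, and the function and parameter names are illustrative:

```python
def uflp_cost(open_sites, opening_cost, service_cost):
    """Cost of a candidate solution to the uncapacitated facility
    location problem: the sum of the opening costs of the open sites
    plus, for each client, the cheapest service cost from an open site.

    opening_cost[j]    -- fixed cost of setting up a facility at site j
    service_cost[i][j] -- cost of serving client i from site j
    """
    if not open_sites:
        return float("inf")  # infeasible: every client must be served
    total = sum(opening_cost[j] for j in open_sites)
    for client_costs in service_cost:
        total += min(client_costs[j] for j in open_sites)
    return total
```

A solution is simply a set of open sites; a metaheuristic such as CBM searches over these sets while `uflp_cost` supplies the value being minimized.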
{"title":"Free lunches in pareto coevolution","authors":"Travis C. Service, D. Tauritz","doi":"10.1145/1569901.1570132","DOIUrl":"https://doi.org/10.1145/1569901.1570132","url":null,"abstract":"Recent work in test-based coevolution has focused on employing ideas from multi-objective optimization in coevolutionary domains. So-called Pareto coevolution treats the coevolving set of test cases as objectives to be optimized in the sense of multi-objective optimization. Pareto coevolution can be seen as a relaxation of traditional multi-objective evolutionary optimization: rather than being forced to determine the outcome of a particular individual on every objective, Pareto coevolution allows the examination of an individual's outcome on a particular objective. By introducing the notion of certifying Pareto dominance and mutual non-dominance, this paper proves for the first time that free lunches exist for the class of Pareto coevolutionary optimization problems. This theoretical result is of particular interest because we explicitly provide an algorithm for Pareto coevolution which has better performance, on average, than all traditional multi-objective algorithms in the relaxed setting of Pareto coevolution. The notion of certificates of preference/non-preference has potential implications for coevolutionary algorithm design in many classes of coevolution, as well as for general multi-objective optimization in the relaxed setting of Pareto coevolution.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129654325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
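The "certificate" idea above can be illustrated as follows: with binary (maximized) outcomes and some entries still unexamined, the known entries may already certify that one candidate Pareto-dominates another for every possible completion of the unknowns. This is our illustrative reading of the notion, not the paper's formal definitions:

```python
def certified_dominates(a, b):
    """True iff candidate a is certain to Pareto-dominate candidate b
    (at least as good on every test, strictly better on some) for every
    completion of the unexamined entries, encoded as None.
    Outcomes are in {0, 1} and higher is better."""
    strictly_better = False
    for x, y in zip(a, b):
        # worst case for the certificate: a's unknowns resolve to 0,
        # b's unknowns resolve to 1
        x = 0 if x is None else x
        y = 1 if y is None else y
        if x < y:
            return False  # some completion breaks dominance
        if x > y:
            strictly_better = True
    return strictly_better
```

The point of such certificates is that dominance can sometimes be established without evaluating an individual on every objective, which is exactly the relaxation that Pareto coevolution permits.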
{"title":"Discovering a domain alphabet","authors":"Michael D. Schmidt, Hod Lipson","doi":"10.1145/1569901.1570047","DOIUrl":"https://doi.org/10.1145/1569901.1570047","url":null,"abstract":"A key to the success of any genetic programming process is the use of a good alphabet of atomic building blocks from which solutions can be evolved efficiently. An alphabet that is too granular may generate an unnecessarily large search space; an inappropriately coarse grained alphabet may bias or prevent finding optimal solutions. Here we introduce a method that automatically identifies a small alphabet for a problem domain. We process solutions on the complexity-optimality Pareto front of a number of sample systems and identify terms that appear significantly more frequently than merited by their size. These terms are then used as basic building blocks to solve new problems in the same problem domain. We demonstrate this process on symbolic regression for a variety of physics problems. The method discovers key terms relating to concepts such as energy and momentum. A significant performance enhancement is demonstrated when these terms are then used as basic building blocks on new physics problems. We suggest that identifying a problem-specific alphabet is key to scaling evolutionary methods to higher complexity systems.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124895390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
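The selection rule above (terms appearing "significantly more frequently than merited by their size") can be sketched as follows. The size-based baseline `branching ** -size` and the `threshold` multiplier are our illustrative choices, not the authors' statistic, and all names are hypothetical:

```python
from collections import Counter


def candidate_alphabet(terms, sizes, branching=2.0, threshold=3.0):
    """terms: list of term strings harvested from Pareto-front solutions.
    sizes: dict mapping each term to its node count.
    Returns the terms whose observed share of the harvest exceeds
    `threshold` times a size-based expectation (larger terms are
    expected to recur exponentially less often by chance)."""
    counts = Counter(terms)
    total = len(terms)
    return sorted(
        t for t in counts
        if counts[t] / total > threshold * branching ** -sizes[t]
    )
```

A three-node term such as `m*v` that recurs across many Pareto-front solutions would clear this bar, while one-node terms that appear only incidentally would not.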
{"title":"Session details: Track 9: genetic algorithms","authors":"Jano von Hemert, T. Lenaerts","doi":"10.1145/3257503","DOIUrl":"https://doi.org/10.1145/3257503","url":null,"abstract":"","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125775302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolved neural fields applied to the stability problem of a simple biped walking model","authors":"Juan J. Figueredo, Jonatan Gómez","doi":"10.1145/1569901.1570154","DOIUrl":"https://doi.org/10.1145/1569901.1570154","url":null,"abstract":"This paper proposes an evolved control architecture based on neural fields for a relatively complex and unstable dynamical system. The neural field model is capable of addressing goal-based planning problems and has properties, such as embedding in a Euclidean space and linear stability, that potentially make it well suited for dynamic control tasks. The neural field control architecture is tested on the stability problem of a typical inverted pendulum, and the performance of an evolved neural field is compared with that of a hand-tuned neural field. The neural field controller performs well in simulation and has a spatial representation that allows interpretation of field potentials. Moreover, the evolved neural field performs almost as well as the hand-tuned one, is more general, and uses a different strategy to control the plant.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127086223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving prediction in evolutionary algorithms for dynamic environments","authors":"A. Simoes, E. Costa","doi":"10.1145/1569901.1570021","DOIUrl":"https://doi.org/10.1145/1569901.1570021","url":null,"abstract":"The addition of prediction mechanisms to Evolutionary Algorithms (EAs) applied to dynamic environments is essential in order to anticipate changes in the landscape and maximize the algorithm's adaptability. In previous work, a combination of a linear regression predictor and a Markov chain model was used to enable the EA to estimate when the next change will occur and to predict the direction of the change. Knowing when and how the change will occur, the change was anticipated by introducing useful information before it happened. In this paper we introduce mechanisms to dynamically adjust the linear predictor in order to achieve higher adaptability and robustness. We also extend previous studies by introducing nonlinear change periods in order to evaluate the predictor's accuracy.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127218756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
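The "when will the next change occur" part of the predictor described above can be sketched with ordinary least squares: fit a line to the generations at which past changes were observed and extrapolate one step ahead. This is a minimal sketch under that assumption, not the authors' code, and the function name is ours:

```python
def predict_next_change(change_generations):
    """Given the generations g_1..g_k at which changes were observed,
    predict g_{k+1} by least-squares regression of g on the change
    index. With a constant change period the prediction is exact."""
    k = len(change_generations)
    if k < 2:
        raise ValueError("need at least two observed changes")
    xs = range(1, k + 1)
    mean_x = sum(xs) / k
    mean_y = sum(change_generations) / k
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, change_generations))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept + slope * (k + 1)
```

The paper's nonlinear change periods are precisely the regime where such a linear predictor degrades, which is why the dynamic adjustment mechanisms matter.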
{"title":"Approximating geometric crossover in semantic space","authors":"K. Krawiec, Pawel Lichocki","doi":"10.1145/1569901.1570036","DOIUrl":"https://doi.org/10.1145/1569901.1570036","url":null,"abstract":"We propose a crossover operator that works with genetic programming trees and is approximately a geometric crossover in the semantic space. By defining semantics as a program's evaluation profile with respect to a set of fitness cases, and by restricting ourselves to a specific class of metric-based fitness functions, we cause the fitness landscape in the semantic space to have perfect fitness-distance correlation. The proposed approximately geometric semantic crossover exploits this property of the semantic fitness landscape by appropriate sampling. We also demonstrate how the proposed method may be conveniently combined with hill climbing. We discuss the properties of the methods and describe an extensive computational experiment concerning logical function synthesis and symbolic regression.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126868654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
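The two core definitions above can be made concrete in a few lines: a program's semantics is its vector of outputs on the fitness cases, and a metric-based fitness is a distance between that vector and the target semantics. With such a fitness, distance in semantic space coincides with the fitness landscape, which is the perfect fitness-distance correlation the paper exploits. The names and the choice of the L1 metric are ours:

```python
def semantics(program, fitness_cases):
    """A program's evaluation profile: its output on each fitness case."""
    return [program(x) for x in fitness_cases]


def metric_fitness(program, fitness_cases, targets):
    """Metric-based fitness: L1 distance between the program's semantics
    and the target semantics (0 means a perfect program)."""
    outs = semantics(program, fitness_cases)
    return sum(abs(o - t) for o, t in zip(outs, targets))
```

Under this view, a geometric offspring is one whose semantics lies on a metric segment between its parents' semantics, so it can never be farther from the target than the worse parent.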
{"title":"Particle swarm optimization with information share mechanism","authors":"Zhi-hui Zhan, Jun Zhang, Rui-zhang Huang","doi":"10.1145/1569901.1570146","DOIUrl":"https://doi.org/10.1145/1569901.1570146","url":null,"abstract":"This paper introduces an information-sharing mechanism into particle swarm optimization (PSO) in order to use all the useful information in the swarm to prevent premature convergence. A particle in traditional PSO uses only the information from its personal best position and the neighborhood's best position. This mechanism does not provide sufficient search information, and the algorithm is therefore easily trapped in local optima. In the proposed information-share PSO (ISPSO), all particles post their best search information to a share device, and any particle can read the information on the device and use the information provided by any other particle to enhance its search ability. The ISPSO can therefore use the whole swarm's information to guide the flying direction. The ISPSO has been applied to optimize multimodal functions, and the experimental results demonstrate that the ISPSO yields better performance when compared with the traditional PSO and several other improved PSOs.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131294795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
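The share device described above can be sketched as a common board to which particles post their personal-best positions and from which any particle can read another particle's entry to guide its velocity update. This is a minimal interpretation of the mechanism, not the authors' implementation, and the class and method names are ours:

```python
import random


class ShareBoard:
    """Common board holding each particle's best-so-far search
    information (fitness, position), assuming minimization."""

    def __init__(self, n_particles):
        self.entries = [None] * n_particles

    def post(self, idx, fitness, position):
        # keep only particle idx's best entry so far
        if self.entries[idx] is None or fitness < self.entries[idx][0]:
            self.entries[idx] = (fitness, list(position))

    def read_other(self, idx, rng=random):
        # sample the information posted by any *other* particle,
        # which the reader can fold into its velocity update
        others = [e for i, e in enumerate(self.entries)
                  if i != idx and e is not None]
        return rng.choice(others)[1] if others else None
```

The position returned by `read_other` would add a third attractor, alongside the personal and neighborhood bests, in the reader's velocity update.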
{"title":"Multiobjectivization for parameter estimation: a case-study on the segment polarity network of drosophila","authors":"T. Hohm, E. Zitzler","doi":"10.1145/1569901.1569931","DOIUrl":"https://doi.org/10.1145/1569901.1569931","url":null,"abstract":"Mathematical modeling of gene regulatory networks (GRNs) provides an effective tool for hypothesis testing in biology. A necessary step in setting up such models is the estimation of model parameters, i.e., an optimization process during which the difference between model output and given experimental data is minimized. This parameter estimation step is often difficult, especially for larger systems, due to often incomplete quantitative data, the large size of the parameter space, and non-linearities in system behavior. Addressing the task of parameter estimation, we investigate the influence multiobjectivization can have on the optimization process. Using the example of an established model for the segment polarity GRN in Drosophila, we test different multiobjectivization scenarios against a single-objective function proposed earlier for the parameter optimization of the segment polarity network model. Since a set of optimal parameter settings, rather than a single optimal setting, exists for this GRN, the comparison of the different optimization scenarios focuses on their capability to identify optimal parameter settings showing good diversity in the parameter space. By embedding the objective functions in an evolutionary algorithm (EA), we show the superiority of the multiobjective approaches in exploring the model's parameter space.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133067286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Particle swarm optimization based multi-prototype ensembles","authors":"A. W. Mohemmed, Mark Johnston, Mengjie Zhang","doi":"10.1145/1569901.1569910","DOIUrl":"https://doi.org/10.1145/1569901.1569910","url":null,"abstract":"This paper proposes and evaluates a Particle Swarm Optimization (PSO) based ensemble classifier. The members of the ensemble are Nearest Prototype Classifiers generated sequentially using PSO and combined by a majority voting mechanism. Two necessary requirements for good performance of an ensemble are accuracy and diversity of error. Accuracy is achieved by PSO minimizing a fitness function representing the error rate as the members are created. Diversity of error is promoted by using a different initialization of PSO each time a new member is created, and by adopting decorrelated training, where a penalty term is added to the fitness function to penalize particles that make the same errors as previously generated classifiers. Simulation experiments on different classification problems show that the ensemble performs better than a single classifier and that the method is effective in generating diverse ensemble members.","PeriodicalId":193093,"journal":{"name":"Proceedings of the 11th Annual conference on Genetic and evolutionary computation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131306179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
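The decorrelated-training fitness described above can be sketched as an error rate plus a penalty proportional to how many of a candidate member's errors coincide with errors already made by the existing ensemble. The penalty weight `lam` and the coincidence count are our illustrative choices, not the paper's exact formulation:

```python
def decorrelated_fitness(candidate_errors, ensemble_errors, lam=0.5):
    """Fitness (to minimize) of a candidate ensemble member.

    candidate_errors -- per-sample bools, True where the candidate errs
    ensemble_errors  -- per-sample bools, True where any existing
                        ensemble member errs
    """
    n = len(candidate_errors)
    error_rate = sum(candidate_errors) / n
    # errors the candidate shares with the existing ensemble
    coincident = sum(c and e for c, e in
                     zip(candidate_errors, ensemble_errors))
    return error_rate + lam * coincident / n
```

Minimizing this fitness with PSO pushes each new member both toward accuracy (the first term) and toward erring on different samples than its predecessors (the second), which is exactly the accuracy-plus-diversity requirement the abstract states.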