Topographical mapping assisted evolutionary search for multilevel optimization. M. El-Beltagy, A. Keane. DOI: 10.1109/CEC.1999.781996

In many problems in science and engineering, a number of computational models are available to simulate the problem at hand. These models typically trade accuracy against computational expense. Given a limited computation budget, a framework is needed for selecting between the different models in a sensible fashion during the search. The method proposed here constructs a heteroassociative mapping to estimate the differences between models and uses this information to guide the search. The proposed framework is tested on the problem of minimizing the transmitted vibration energy in a satellite boom.
Learning coordinated maneuvers in complex environments: a sumo experiment. Jiming Liu, Chow Kwong Pok, HuiKa Keung. DOI: 10.1109/CEC.1999.781945

This paper describes a dual-agent system capable of learning eye-body-coordinated maneuvers for a sumo contest. The two agents rely on each other, one offering feedback on the physical performance of a selected maneuver, the other giving advice on candidate maneuvers to improve on the previous performance. At the core of this learning system lies a multi-phase genetic-programming approach that enables the player to gradually acquire sophisticated sumo maneuvers. As illustrated in learning experiments involving opponents of complex shapes and sizes, the proposed multi-phase learning allows specialized strategic maneuvers to be developed from general ones, demonstrating efficient maneuver acquisition. We provide details of the problem addressed and the implemented solutions: a mobile robot for performing sumo maneuvers and a computational assistant for coaching the robot. In addition, we show the actual performance of the coached sumo agent in a number of difficult sumo situations.
Rule acquisition with a genetic algorithm. R. Cattral, F. Oppacher, D. Deugo. DOI: 10.1109/CEC.1999.781916

This paper describes the implementation and functioning of RAGA (rule acquisition with a genetic algorithm), a genetic-algorithm-based data mining system suitable for both supervised and certain types of unsupervised knowledge extraction from large and possibly noisy databases. RAGA differs from a standard genetic algorithm in several crucial respects, including the following: (i) its 'chromosomes' are variable-length symbolic structures, i.e. association rules that may contain n-place predicates (n >= 0); (ii) besides typed crossover and mutation operators, it uses macromutations as generalization and specialization operators to efficiently explore the space of rules; and (iii) it evolves a default hierarchy of rules. Several data mining experiments with the system are described.
Using genetic algorithms for sparse distributed memory initialization. A. Anwar, D. Dasgupta, S. Franklin. DOI: 10.1109/CEC.1999.782538

We describe the use of genetic algorithms to initialize the set of hard locations that constitutes the storage space of Sparse Distributed Memory (SDM). SDM is an associative memory technique that operates on binary spaces and relies on close memory items tending to cluster together, with some level of abstraction. An important factor in the physical implementation of SDM is the number of hard locations used, which greatly affects memory capacity; capacity also depends on the dimension of the binary space. For the SDM system to function appropriately, the hard locations should be uniformly distributed over the binary space. We represented the hard locations of the SDM as population members and employed a GA to search for the best (fittest) distribution of hard locations over the vast binary space. Fitness is based on how far each hard location is from all other hard locations, which measures the uniformity of the distribution. The preliminary results are very promising, with the GA significantly outperforming the random initialization used in most existing SDM implementations. This use of the GA, which resembles the Michigan approach, differs from the standard approach in that the object of the search is the entire population.
A genetic algorithm with a Mendel operator for global minimization. In-Soo Song, Hyun-Wook Woo, M. Tahk. DOI: 10.1109/CEC.1999.782664

This paper proposes a modified genetic algorithm for global minimization. The algorithm uses a new genetic operator, the Mendel operator. It first finds a local minimizer and then finds a lower minimizer at the next iteration, in the manner of a tunneling algorithm or a filled-function method. By repeating this process, a global minimizer can finally be obtained. The Mendel operations, simulating Mendel's genetic law, are devised to avoid converging to the minimizer found in the previous run. An elitist method guarantees convergence to a lower minimizer.
Genetic programming and co-evolution with exogenous fitness in an artificial life environment. Michael Waters, J. Sheppard. DOI: 10.1109/CEC.1999.785471

The study of artificial life involves simulating biological or sociological processes with a computer. Combining artificial life with techniques from evolutionary computation frequently involves modeling the behavior or decision processes of artificial organisms within a society so that genetic algorithms can modify these models and enhance behavior over time. Typically, endogenous fitness is used with co-evolution. We explore the use of an exogenous fitness function with genetic programming and co-evolution to develop individuals and species capable of competing in a hostile environment. To facilitate the study, we use a commercially available environment, AI Wars, to host the organisms and run the experiments. Results from our experiments, though preliminary, indicate that co-evolution, genetic programming, and exogenous fitness can evolve fit individuals. The results also suggest that the nature of the fitness landscape and the impact of various fitness factors on evolutionary performance can be assessed.
It's all the same to me: revisiting rank-based probabilities and tournaments. B. Julstrom. DOI: 10.1109/CEC.1999.782661

One of the defining operations of genetic algorithms is selection: choosing chromosomes from the population to generate offspring via crossover or mutation. Researchers have described many selection algorithms, including schemes that assign probabilities based on chromosomes' ranks in the population and schemes that simulate tournaments among chromosomes. The paper investigates two rank-based assignments of probabilities, linear normalization and exponential normalization, and two tournament selection schemes, 2-tournament selection without replacement and k-tournament selection with replacement. It makes explicit the probabilities that each associates with the population's chromosomes; demonstrates, following other researchers but using elementary arguments based on these probabilities, the equivalence of linear normalization with 2-tournament selection and of exponential normalization with k-tournament selection; and argues for using tournament selection rather than explicitly assigning rank-based probabilities whenever possible.
Self-adaptation and global convergence: a counter-example. G. Rudolph. DOI: 10.1109/CEC.1999.781994

The self-adaptation of the mutation distribution is a distinguishing feature of evolutionary algorithms that optimize over continuous variables. It is widely recognized that self-adaptation accelerates the search for optima and enhances the ability to locate optima accurately, but it is generally unclear whether these optima are global ones. Here, it is proven that the probability of convergence to the global optimum is less than one in general, even if the objective function is continuous.
Local search operators in fast evolutionary programming. H. K. Birru, K. Chellapilla, S. Rao. DOI: 10.1109/CEC.1999.782662

Previous studies have shown that embedding local search in classical evolutionary programming (EP) can improve performance on function optimization problems. Here, the utility of local search is investigated with fast evolutionary programming (FEP), and the performance improvements obtained with Gaussian and Cauchy mutations are compared. Experiments were conducted on a suite of four well-known function optimization problems using two local search methods (conjugate gradient and the method of F.J. Solis and R.J.-B. Wets (1981)), with varying amounts of local search incorporated into the evolutionary algorithm. Empirical results indicate that FEP with the conjugate gradient method outperforms the other hybrid methods on three of the four functions when evolution is run for a fixed number of generations. Trials using local search produced solutions that were statistically as good as or better than trials without it. However, the cost of local search justified the gain in solution quality only with Gaussian mutations, not with Cauchy mutations.
Evolving chaotic neural systems for time series prediction. Dong-Wook Lee, K. Sim. DOI: 10.1109/CEC.1999.781941

We present a new type of neural architecture consisting of chaotic neurons and apply it to the prediction of chaotic time series signals. To evolve the chaotic neural systems, we use cellular automata whose production rules are evolved with a DNA coding method. The structure of the networks is appropriate for learning nonlinear, chaotic, and nonstationary systems. To verify their effectiveness, we apply the evolutionary chaotic neural systems to one-step-ahead prediction of Mackey-Glass time series data.