In many problems in science and engineering, there exist a number of computational models to simulate the problem at hand. These models are usually trade-offs between accuracy and computational expense. Given a limited computation budget, there is a need for a framework that selects between the different models in a sensible fashion during the search. The method proposed here is based on constructing a heteroassociative mapping to estimate the differences between models and using this information to guide the search. The proposed framework is tested on the problem of minimizing the transmitted vibration energy in a satellite boom.
M. El-Beltagy and A. Keane, "Topographical mapping assisted evolutionary search for multilevel optimization," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.781996
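The multilevel idea can be sketched in a few lines: run most of the search on a cheap model, learn a mapping between cheap and expensive evaluations from a handful of paired samples, and rank candidates with the corrected cheap model. The paper constructs a heteroassociative (neural) mapping; the toy models and the linear least-squares correction below are illustrative assumptions, not the authors' setup.

```python
import random

random.seed(0)  # deterministic sketch

# Toy stand-in models (assumptions, not taken from the paper):
# an expensive "fine" model and a cheap, systematically biased "coarse" one.
def fine_model(x):
    return (x - 2.0) ** 2

def coarse_model(x):
    return (x - 2.0) ** 2 + 0.5 * x + 1.0

# Learn a correction coarse -> fine from a few paired evaluations.
# (Least squares on a linear correction stands in for the paper's
# heteroassociative mapping.)
xs = [random.uniform(-5.0, 5.0) for _ in range(20)]
ds = [fine_model(x) - coarse_model(x) for x in xs]
n = len(xs)
sx, sy = sum(xs), sum(ds)
sxx = sum(x * x for x in xs)
sxy = sum(x * d for x, d in zip(xs, ds))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def corrected(x):
    """Cheap surrogate used for most of the search."""
    return coarse_model(x) + slope * x + intercept

# Rank many cheap candidates with the corrected model only.
best = min((random.uniform(-5.0, 5.0) for _ in range(1000)), key=corrected)
print(abs(best - 2.0) < 0.2)  # True: the corrected model finds the true optimum
```

Because the bias here happens to be linear, the learned correction makes the cheap model exact; in practice the mapping is only approximate and the expensive model must still be consulted occasionally.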
This paper describes a dual-agent system capable of learning eye-body-coordinated maneuvers in playing a sumo contest. The two agents rely on each other, either offering feedback on the physical performance of a selected maneuver or giving advice on candidate maneuvers to improve on the previous performance. At the core of this learning system lies a multi-phase genetic-programming approach aimed at enabling the player to gradually acquire sophisticated sumo maneuvers. As illustrated in the sumo learning experiments involving opponents of complex shapes and sizes, the proposed multi-phase learning allows specialized strategic maneuvers to be developed from general ones, and hence demonstrates the efficiency of maneuver acquisition. We provide details of the problem addressed and the implemented solutions: a mobile robot for performing sumo maneuvers and a computational assistant for coaching the robot. In addition, we show the actual performance of the sumo agent, as a result of coaching, in dealing with a number of difficult sumo situations.
Jiming Liu, Chow Kwong Pok, and HuiKa Keung, "Learning coordinated maneuvers in complex environments: a sumo experiment," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.781945
Transposition is a new genetic operator, an alternative to crossover, that allows a classical GA to achieve better results. This mechanism, characterized by the presence of mobile genetic units, must be used with the right parameters to achieve maximum GA performance. The paper presents the results of an empirical study offering the main guidelines for choosing the parameter settings to use with transposition, which will lead the GA to the best solutions.
A. Simoes and E. Costa, "Enhancing transposition performance," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.782651
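As a concrete illustration of a transposition-style operator, the sketch below excises a short mobile segment (a "transposon") from a chromosome and reinserts it at another locus. The cut-and-paste-at-random variant is a simplifying assumption: the operator studied by Simoes and Costa locates transposons via flanking sequences and has tunable parameters such as transposon length, which is precisely what their empirical study calibrates.

```python
import random

def transpose(chrom, max_len=4, rng=random):
    """Cut-and-paste transposition: excise a short mobile segment
    (a 'transposon') and reinsert it at another locus.
    Simplified sketch; the published operator identifies transposons
    via matching flanking sequences rather than at random."""
    n = len(chrom)
    length = rng.randint(1, max_len)
    start = rng.randrange(n - length + 1)
    transposon = chrom[start:start + length]
    rest = chrom[:start] + chrom[start + length:]
    dest = rng.randrange(len(rest) + 1)
    return rest[:dest] + transposon + rest[dest:]

random.seed(1)
parent = [1, 1, 1, 1, 0, 0, 0, 0]
child = transpose(parent)
print(sorted(child) == sorted(parent))  # True: genes are rearranged, not lost
```

Unlike crossover, the operator acts on a single parent, so it can reorder building blocks without requiring a compatible mate.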
The paper provides a preliminary evaluation of the accuracy of computer-aided diagnostics (CAD) in addressing the inconsistencies of inter-observer variance scoring. The inter-observer variability problem, in this case, relates to different cytopathologists and radiologists at separate locations scoring the same type of samples differently using the same methodologies and environmental discriminants. Two distinctly different FNA data sets were used. The first was the data collected at the University of Wisconsin (Wolberg data set), while the other was a completely independent one defined and processed at the Breast Cancer Center, University Health Center at Syracuse (Syracuse data set). Two CAD paradigms were used: the evolutionary programming (EP)/probabilistic neural network (PNN) hybrid and a mean-of-predictors model. Four experiments were performed to evaluate the hybrid. The fourth experiment, k-fold cross-validation, resulted in a 91.25% average classification accuracy with a 0.9783 average Az index. The mean-of-predictors model was used to verify the results of the more complex hybrid using both the fraction of missed malignancies (Type II errors) and the fraction of false malignancies (Type I errors). The EP/PNN hybrid experiments resulted in a 3.05% mean value of missed malignancies (Type II errors) and a 5.69% mean value of false malignancies (Type I errors) in the k-fold cross-validation studies. The mean-of-predictors model provided a 0.429% mean Type II error and a 4.09% mean Type I error.
W. Land, Lewis A. Loren, and T. Masters, "Investigation of and preliminary results for the solution of the inter-observer variability problem using fine needle aspirate (FNA) data," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.785489
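For reference, the two error fractions quoted above can be computed as follows. Whether the paper's denominators are per-class or over all cases is not stated, so fractions of all cases are assumed here.

```python
# Type I  = benign cases falsely called malignant (false malignancies)
# Type II = malignant cases missed (missed malignancies)
def error_fractions(y_true, y_pred):
    """y_true/y_pred: 1 = malignant, 0 = benign.
    Returns (Type I fraction, Type II fraction) over all cases."""
    type1 = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    type2 = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return type1 / len(y_true), type2 / len(y_true)

# Small made-up example, not the paper's data.
y_true = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
print(error_fractions(y_true, y_pred))  # (0.1, 0.1)
```

In a screening setting the two errors are not symmetric in cost: a missed malignancy (Type II) is usually far more serious than a false alarm, which is why the paper reports them separately.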
A genetic algorithm is applied to the problem of conditioning the petrophysical rock properties of a reservoir model on historic production data. This is a difficult optimization problem where each evaluation of the objective function implies a flow simulation of the whole reservoir. Due to the high computing cost of this function, it is imperative to make use of an efficient optimization method to find a near optimal solution using as few iterations as possible. We have applied a genetic algorithm to this problem. Ten independent runs are used to give a prediction with an uncertainty estimate for the total future oil production using two different production strategies.
H. Soleng, "Oil reservoir production forecasting with uncertainty estimation using genetic algorithms," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.782574
This paper describes the implementation and the functioning of RAGA (rule acquisition with a genetic algorithm), a genetic-algorithm-based data mining system suitable for both supervised and certain types of unsupervised knowledge extraction from large and possibly noisy databases. RAGA differs from a standard genetic algorithm in several crucial respects, including the following: (i) its 'chromosomes' are variable-length symbolic structures, i.e., association rules that may contain n-place predicates (n ≥ 0), (ii) besides typed crossover and mutation operators, it uses macromutations as generalization and specialization operators to efficiently explore the space of rules, and (iii) it evolves a default hierarchy of rules. Several data mining experiments with the system are described.
R. Cattral, F. Oppacher, and D. Deugo, "Rule acquisition with a genetic algorithm," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.781916
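The generalization/specialization macromutations can be sketched on a rule antecedent represented, as an assumption, as a set of attribute tests: dropping a condition generalizes the rule (it covers more cases), while adding one specializes it.

```python
import random

# Sketch of RAGA-style macromutations on a rule antecedent, represented
# here (an assumption, not the paper's encoding) as a dict of attribute tests.
def generalize(rule, rng=random):
    """Drop one condition, so the rule covers more cases."""
    conds = dict(rule)
    if conds:
        conds.pop(rng.choice(list(conds)))
    return conds

def specialize(rule, candidates, rng=random):
    """Add one unused condition, so the rule covers fewer cases."""
    conds = dict(rule)
    unused = [(a, v) for a, v in candidates if a not in conds]
    if unused:
        a, v = rng.choice(unused)
        conds[a] = v
    return conds

random.seed(2)
rule = {"color": "red", "size": "large"}
g = generalize(rule)
s = specialize(rule, [("shape", "round"), ("color", "blue")])
print(len(g), len(s))  # prints "1 3"
```

These moves are "macro" relative to bit-level mutation because each one changes the rule's coverage by a whole condition at a time, which lets the search traverse the rule space in larger, semantically meaningful steps.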
This paper deals with identical parallel machine scheduling problems with two kinds of objective functions, i.e., both regular and non-regular objective functions, and proposes a genetic algorithm approach in which (a) the sequence of jobs on each machine as well as the assignment of jobs to machines are determined directly by referring to a string (genotype), and (b) the start time of each job is fixed by solving a linear programming problem, yielding a feasible schedule (phenotype). As for (b), we newly introduce a method of representing the problem of determining the start time of each job as a linear programming problem whose objective function is formed as a weighted sum of the original multiple objective functions. This method enables us to obtain a large number of potential schedules. Moreover, through computational experiments using our genetic algorithm approach, its effectiveness in generating a variety of Pareto-optimal schedules is investigated.
H. Tamaki, Etsuo Nishino, and S. Abe, "A genetic algorithm approach to multi-objective scheduling problems with earliness and tardiness penalties," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.781906
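Step (a) of the decode can be sketched as below: a job-order string is turned into per-machine sequences by assigning each job, in genotype order, to the earliest-free machine. The earliest-free assignment rule is an illustrative assumption, and step (b), fixing start times via the weighted-sum linear program, is omitted.

```python
# Sketch of step (a): decode a job-order genotype into per-machine
# sequences on identical parallel machines.
def decode(genotype, proc_times, n_machines):
    free = [0.0] * n_machines            # next free time per machine
    schedule = [[] for _ in range(n_machines)]
    for job in genotype:
        m = free.index(min(free))        # earliest-available machine
        schedule[m].append(job)
        free[m] += proc_times[job]
    return schedule, max(free)           # sequences and makespan

# Hypothetical 5-job, 2-machine instance.
genotype = [2, 0, 3, 1, 4]
proc = {0: 3.0, 1: 2.0, 2: 4.0, 3: 1.0, 4: 2.0}
sched, makespan = decode(genotype, proc, 2)
print(sched, makespan)  # [[2, 1], [0, 3, 4]] 6.0
```

With earliness and tardiness penalties (a non-regular objective), left-shifting every job is not optimal, which is why the paper fixes start times with a separate linear program instead of the greedy packing shown here.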
Sequencing of DNA is among the most important tasks in molecular biology. DNA chips are considered to be a more rapid alternative to more common gel-based methods of sequencing. Previously, we demonstrated the reconstruction of DNA sequence information from a simulated DNA chip using evolutionary programming. The research presented here extends this work by relaxing several assumptions adopted in our initial investigation. We also examine the relationship between the base composition of the target sequence and the useful set of probes required to decipher the target on a DNA chip. Comments regarding the nature of the optimal ratio between the target and probe lengths are offered. Our results further suggest that evolutionary computation is well suited to the sequence reconstruction problem.
G. Fogel and K. Chellapilla, "Simulated sequencing by hybridization using evolutionary programming," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.781960
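A toy version of the reconstruction task: the chip reports which l-mers occur in the target, and a candidate sequence is acceptable when its own l-mer spectrum matches. The paper evolves candidate sequences with evolutionary programming; the greedy overlap walk below is only a stand-in to show the problem structure.

```python
# Sequencing by hybridization in miniature: recover a sequence from
# its set ("spectrum") of length-l substrings.
def spectrum(seq, l=3):
    return {seq[i:i + l] for i in range(len(seq) - l + 1)}

def reconstruct(spec, length, l=3):
    # Greedy walk: start from each l-mer and extend by (l-1)-base overlaps.
    # (The paper uses evolutionary programming, not this greedy stand-in.)
    for start in spec:
        seq = start
        while len(seq) < length:
            nxt = [p for p in spec if p[:l - 1] == seq[-(l - 1):]]
            if not nxt:
                break
            seq += nxt[0][-1]
        if len(seq) == length and spectrum(seq, l) == spec:
            return seq
    return None

target = "ATGCGTAC"  # hypothetical target sequence
rebuilt = reconstruct(spectrum(target), len(target))
print(rebuilt)  # ATGCGTAC
```

Real instances are harder than this toy: repeated l-mers and hybridization noise make the greedy walk fail, which motivates a population-based search over candidate sequences.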
A fuzzy simulated evolution algorithm is presented for multi-objective minimization of the VLSI cell placement problem. We propose a fuzzy goal-based search strategy combined with a fuzzy allocation scheme. The allocation scheme tries to minimize multiple objectives and adds controlled randomness, as opposed to the original deterministic allocation schemes. Experiments with benchmark tests demonstrate a noticeable improvement in solution quality.
S. M. Sait, H. Youssef, and Hussain Ali, "Fuzzy simulated evolution algorithm for multi-objective optimization of VLSI placement," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.781912
An evolutionary algorithm (EA) approach is used to generate test vectors for the detection of shrinkage faults in programmable logic arrays (PLAs). Three basic steps are performed during the generation of the test vectors: crossover, mutation, and selection. A new mutation operator is introduced that helps increase the Hamming distance among the candidate solutions. Once crossover and mutation have occurred, new candidate test vectors with higher fitness scores replace the old ones. With this scheme, population members steadily improve their fitness level with each new generation. The resulting process yields improved solutions to the problem of PLA test vector generation for shrinkage faults. PLA testing and fault simulation are computationally prohibitive on uniprocessor machines. However, PLAGA is well suited to powerful parallel processing machines with vectorization capability.
Alfiedo Cruz and S. Mukherjee, "PLAGA: a highly parallelizable genetic algorithm for programmable logic arrays test pattern generation," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), July 1999. DOI: 10.1109/CEC.1999.782524
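A diversity-promoting mutation in the spirit described above can be sketched as follows. The specific rule used here (flip bits where the candidate agrees with the population's per-position majority) is an assumption, since the paper states only that the operator increases Hamming distance among candidates.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def diversifying_mutation(vector, population, n_flips=2):
    """Mutation biased toward raising Hamming distance: flip bits at
    positions where the vector agrees with the population's majority
    value. (Illustrative assumption; the paper's operator is described
    only as Hamming-distance-increasing.)"""
    majority = [round(sum(col) / len(col)) for col in zip(*population)]
    agree = [i for i, (v, m) in enumerate(zip(vector, majority)) if v == m]
    child = list(vector)
    for i in agree[:n_flips]:
        child[i] = 1 - child[i]
    return child

pop = [[1, 1, 0, 0], [1, 0, 0, 1], [1, 1, 1, 0]]
v = [1, 1, 0, 0]
child = diversifying_mutation(v, pop)
avg_before = sum(hamming(v, p) for p in pop) / len(pop)
avg_after = sum(hamming(child, p) for p in pop) / len(pop)
print(avg_after > avg_before)  # True: offspring moves away from the population
```

Spreading candidate test vectors apart in Hamming space is useful here because dissimilar vectors tend to exercise different cross-points of the PLA, improving fault coverage per generation.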