The problem of finding robust solutions for scheduling problems is of utmost importance for real-world applications as they operate in dynamic environments. In such environments it is often necessary to reschedule the existing plan due to various failures (e.g., machine breakdowns, sickness of employees, etc.). Thus a robust solution (i.e., a quality solution which can be modified easily according to a change in the environment) may be more valuable than an optimal solution which does not allow easy modifications. The issue of robust solutions for job shop scheduling problems is considered. A robustness measure is defined and its properties are investigated. This study is supported by a series of experiments; the results indicate that robust solutions exist and can be identified.
{"title":"Robust solutions to job shop problems","authors":"Mikkel Tjornfelt-Jensen, Tage Kiilsholm Hansen","doi":"10.1109/CEC.1999.782551","DOIUrl":"https://doi.org/10.1109/CEC.1999.782551","url":null,"abstract":"The problem of finding robust solutions for scheduling problems is of utmost importance for real-world applications as they operate in dynamic environments. In such environments it is often necessary to reschedule the existing plan due to various failures (e.g., machine breakdowns, sickness of employees, etc.). Thus a robust solution (i.e., a quality solution which can be modified easily according to a change in the environment) may be more valuable than an optimal solution which does not allow easy modifications. The issue of robust solutions for job shop scheduling problems is considered. A robustness measure is defined and its properties are investigated. This study is supported by a series of experiments; the results indicate that robust solutions exist and can be identified.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114670054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Presents neural network training algorithms based on the differential evolution (DE) strategies introduced by Storn and Price (J. of Global Optimization, vol. 11, pp. 341-59, 1997). These strategies are applied to train neural networks with small integer weights. Such neural networks are better suited for hardware implementation than real-weight ones. Furthermore, we constrain the weights and biases to the range [-2^k+1, 2^k-1], for k=3,4,5. Thus, they can be represented by just k bits. These algorithms have been designed keeping in mind that the resulting integer weights require fewer bits to store and that the digital arithmetic operations between them are more easily implemented in hardware. Obviously, if the network is trained in a constrained weight space, smaller weights are found and less memory is required. On the other hand, the network training procedure can be more effective and efficient when large weights are allowed. Thus, for a given application, a trade-off between effectiveness and memory consumption has to be considered. We present the results of evolutionary algorithms for this difficult task. Based on the application of the proposed class of methods to classical neural network benchmarks, our experience is that these methods are effective and reliable.
{"title":"Neural network training with constrained integer weights","authors":"V. Plagianakos, M. Vrahatis","doi":"10.1109/CEC.1999.785521","DOIUrl":"https://doi.org/10.1109/CEC.1999.785521","url":null,"abstract":"Presents neural network training algorithms which are based on the differential evolution (DE) strategies introduced by Storn and Price (J. of Global Optimization, vol. 11, pp. 341-59, 1997). These strategies are applied to train neural networks with small integer weights. Such neural networks are better suited for hardware implementation than the real weight ones. Furthermore, we constrain the weights and biases in the range [-2/sup k/+1, 2/sup k/-1], for k=3,4,5. Thus, they can be represented by just k bits. These algorithms have been designed keeping in mind that the resulting integer weights require less bits to be stored and the digital arithmetic operations between them are more easily implemented in hardware. Obviously, if the network is trained in a constrained weight space, smaller weights are found and less memory is required. On the other hand, the network training procedure can be more effective and efficient when large weights are allowed. Thus, for a given application, a trade-off between effectiveness and memory consumption has to be considered. We present the results of evolution algorithms for this difficult task. Based on the application of the proposed class of methods on classical neural network benchmarks, our experience is that these methods are effective and reliable.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115740850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We empirically study the performance of the particle swarm optimizer (PSO). Four different benchmark functions with asymmetric initial range settings are selected as test functions. The experimental results illustrate the advantages and disadvantages of the PSO. In all of the test cases, the PSO converges very quickly towards the optimal positions but may slow its convergence when it is near a minimum. Nevertheless, the experimental results show that the PSO is a promising optimization method, and a new approach, such as an adaptive inertia weight, is suggested to improve the PSO's performance near the optima.
{"title":"Empirical study of particle swarm optimization","authors":"Yuhui Shi, R. Eberhart","doi":"10.1109/CEC.1999.785511","DOIUrl":"https://doi.org/10.1109/CEC.1999.785511","url":null,"abstract":"We empirically study the performance of the particle swarm optimizer (PSO). Four different benchmark functions with asymmetric initial range settings are selected as testing functions. The experimental results illustrate the advantages and disadvantages of the PSO. Under all the testing cases, the PSO always converges very quickly towards the optimal positions but may slow its convergence speed when it is near a minimum. Nevertheless, the experimental results show that the PSO is a promising optimization method and a new approach is suggested to improve PSO's performance near the optima, such as using an adaptive inertia weight.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127201571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We review four current projects pertaining to artificial neural network (ANN) models that merge radiologist-extracted findings to perform computer-aided diagnosis (CADx) of breast cancer. These projects are: (1) prediction of breast lesion malignancy using mammographic findings; (2) classification of malignant lesions as in situ vs. invasive cancer; (3) prediction of breast mass malignancy using ultrasound findings; and (4) the evaluation of CADx models in a cross-institution study. All four projects use feedforward error-backpropagation ANNs. Inputs to the ANNs are medical findings such as mammographic or ultrasound lesion descriptors and patient history data; the output is the predicted biopsy outcome (benign vs. malignant, or in situ vs. invasive cancer). All ANNs undergo supervised training using actual patient data. These ANN decision models may assist in the management of patients with breast lesions, for example by reducing the number of unnecessary surgical procedures and their associated cost.
{"title":"Application of artificial neural networks for diagnosis of breast cancer","authors":"J. Lo, C. Floyd","doi":"10.1109/CEC.1999.785486","DOIUrl":"https://doi.org/10.1109/CEC.1999.785486","url":null,"abstract":"We review four current projects pertaining to artificial neural network (ANN) models that merge radiologist-extracted findings to perform computer aided diagnosis (CADx) of breast cancer. These projects are: (1) prediction of breast lesion malignancy using mammographic findings; (2) classification of malignant lesions as in situ vs. invasive cancer; (3) prediction of breast mass malignancy using ultrasound findings; and (4) the evaluation of CADx models in a cross-institution study. These projects share in common the use of feedforward error backpropagation ANNs. Inputs to the ANNs are medical findings such as mammographic or ultrasound lesion descriptors and patient history data. The output is the biopsy outcome (benign vs. malignant, or in situ vs. invasive cancer) which is being predicted. All ANNs undergo supervised training using actual patient data. These ANN decision models may assist in the management of patients with breast lesions, such as by reducing the number of unnecessary surgical procedures and their associated cost.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125869103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-recovery micro-rollback synthesis (SMS) has become an important issue in high-level synthesis. SMS combines the problem of functional unit scheduling and assignment with the problem of checkpoint insertion and microprogram optimization. These problems have been shown to be NP-complete. The most studied problem is functional unit scheduling and assignment, for which several heuristic techniques have been proposed, including as soon as possible (ASAP), as late as possible (ALAP), integer programming, the spring elasticity model, the graph-based mobility model, and genetic algorithms. However, there are few studies on self-recovery micro-rollback synthesis, and searching its solution space with a genetic algorithm has not been attempted. We study the feasibility of a genetic algorithm for the SMS problem under constraints on the number of functional units, control steps, and checkpoints, and on the functional unit areas.
{"title":"Genetic micro-rollback self-recovery synthesis","authors":"Kingkarn Sookhanaphibarn, C. Lursinsap","doi":"10.1109/CEC.1999.782571","DOIUrl":"https://doi.org/10.1109/CEC.1999.782571","url":null,"abstract":"Self-recovery micro-rollback synthesis (SMS) has currently become an important issue in high level synthesis. The problem of SMS combines the problem of functional unit scheduling and assignment with the problem of checkpoint insertion and microprogram optimization. It has been shown that these problems are NP-complete. The most studied problem is functional unit scheduling and assignment. Several heuristic techniques, including as soon as possible (ASAP), as last as possible (ALAP), integer programming, spring elasticity model, graph based mobility model, and genetic algorithm, are proposed. However, there are few studies on self-recovery micro-rollback synthesis and the technique of solution space searching by genetic algorithm has not been attempted. We study the feasibility of the genetic algorithm for the problem of SMS constrained on: the number of functional units, control steps, number of checkpoints, and the functional unit areas.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126851925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of GAs (genetic algorithms) in evolvable hardware is reviewed. A case is made for implementing as much of the GA in hardware as possible. The technical difficulties of using a standard GA with an FPGA are described. A new type of GA called a Ringed GA, which features only local interactions among individuals, is introduced. A new type of reconfigurable platform called the PIG is described. The use of the PIG to support local, parallel GA operations is explained. Experiments in evolving digital circuits using a ringed GA on the PIG are described. Conclusions and plans for future work are presented.
{"title":"Ring around the PIG: a parallel GA with only local interactions coupled with a self-reconfigurable hardware platform to implement an O(1) evolutionary cycle for evolvable hardware","authors":"N. Macias","doi":"10.1109/CEC.1999.782541","DOIUrl":"https://doi.org/10.1109/CEC.1999.782541","url":null,"abstract":"The use of GAs (genetic algorithms) in evolvable hardware is reviewed. A case is made for implementing as much of the GA in hardware as possible. The technical difficulties of using a standard GA with an FPGA are described. A new type of GA called a Ringed GA, which features only local interactions among individuals, is introduced. A new type of reconfigurable platform called the PIG is described. The use of the PIG to support local, parallel GA operations is explained. Experiments in evolving digital circuits using a ringed GA on the PIG are described. Conclusions and plans for future work are presented.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126856986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Genetic algorithms (GAs) have been demonstrated to be a promising search and optimization technique. However, there are two issues in applying genetic algorithms to complex system identification. The first is the high computational cost due to their slow convergence. The second is their scalability to high-dimensional model identification problems. To alleviate these difficulties, we propose a two-layer supervisory model optimization architecture and hybrid GA algorithms. The upper supervisory layer guides the lower-level optimization algorithm so that the optimization space of the algorithm is gradually reduced. The lower layer uses a simplex-GA approach to perform search and numerical optimization within the range defined by the upper layer. The simplex is added as an additional operator to the traditional GA to speed up convergence. We have applied the proposed approach to tomographic reconstruction and the modeling of central metabolism; the results are satisfactory.
{"title":"A supervisory architecture and hybrid GA for the identifications of complex systems","authors":"Linyu Yang, J. Yen, Athirathnam Rajesh, K. D. Kihm","doi":"10.1109/CEC.1999.782513","DOIUrl":"https://doi.org/10.1109/CEC.1999.782513","url":null,"abstract":"Genetic Algorithms (GA's) have been demonstrated to be a promising search and optimization technique. However, there are two issues regarding applying genetic algorithms to complex system identifications. The first issue is the high computational cost due to their slow convergence. The second issue is its scalability to deal with high dimensional model identification problems. To alleviate the difficulties, we propose a two-layer supervisory model optimization architecture and hybrid GA algorithms. The upper supervisory layer guides the low level optimization algorithm so that the optimization space of the algorithm is gradually reduced. The lower layer uses simplex-GA approach to perform search and numerical optimization within the range defined by the upper layer. Simplex is added as an additional operator of traditional GA to speed up the convergence. We have applied the proposed approach to tomographic reconstruction and the modeling of central metabolism, the results are satisfactory.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114904547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Niche exploitation by an organism results cumulatively from its existing adaptations and phylogenetic history. The biological sonar of dolphins is an adaptation for object (e.g. prey or obstacle) detection and classification in visually limited environments. Current biomimetic modeling of echo discrimination by dolphins emphasizes the mechanical and neurological filtering of the peripheral auditory system prior to central nervous system processing of echoes. Anatomical data from, and psychoacoustic and neurophysiological experiments performed on, bottlenose dolphins (Tursiops truncatus) have determined the structure of auditory tuning curves for a few tested frequencies. However, an optimal filter set has yet to be developed that demonstrates comparable frequency-dependent sensitivity across the range of dolphin hearing. Evolutionary computation techniques are employed to match the sensitivity of the filters to that observed in the bottlenose dolphin, by seeding the population with bounded filter parameters and evolving the number, shape, and frequency distribution of individual filters. Comparisons of evolved and known biological tuning curves are discussed.
{"title":"Creation of a biomimetic model of dolphin hearing through the use of evolutionary computation","authors":"D. Houser, D. Helweg, K. Chellapilla, P. Moore","doi":"10.1109/CEC.1999.781971","DOIUrl":"https://doi.org/10.1109/CEC.1999.781971","url":null,"abstract":"Niche exploitation by an organism cumulatively results from its existing adaptations and phylogenetic history. The biological sonar of dolphins is an adaptation for object (e.g. prey or obstacle) detection and classification in visually limited environments. Current biomimetic modeling of echo discrimination by dolphins emphasizes the mechanical and neurological filtering of the peripheral auditory system prior to central nervous system processing of echoes. Anatomical data from, and psychoacoustic and neurophysiological experiments performed on bottlenose dolphins (Tursiops truncatus) have determined the structure of auditory tuning curves for a few tested frequencies. However, an optimal filter set has yet to be developed that demonstrates comparable frequency-dependent sensitivity across the range of dolphin hearing. Evolutionary computation techniques are employed to optimize the sensitivity of filters to that observed in the bottlenose dolphin, by seeding the population with bounded filter parameters and evolving the number, shape, and frequency distribution of individual filters. Comparisons of evolved and known biological tuning curves are discussed.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123034361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EVOLVE IV is an evolutionary ecosystem model designed to explore niche proliferation and the emergence of inter-specific interactions. Organisms can interact by exchanging metabolites and by modifying their environment either to the benefit or detriment of neighboring organisms. Experiments indicate that niche formation occurs in the model.
{"title":"Computer experiments on the development of niche specialization in an artificial ecosystem","authors":"J. J. Brewster, M. Conrad","doi":"10.1109/CEC.1999.781957","DOIUrl":"https://doi.org/10.1109/CEC.1999.781957","url":null,"abstract":"EVOLVE IV is an evolutionary ecosystem model designed to explore niche proliferation and the emergence of inter-specific interactions. Organisms can interact by exchanging metabolites and by modifying their environment either to the benefit or detriment of neighboring organisms. Experiments indicate that niche formation occurs in the model.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114450473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Turbo codes have been an important revolution in the digital communications world. Since their discovery, the coding community has been trying to understand, explain, and improve turbo codes. The error floor phenomenon is the main problem of parallel concatenated convolutional turbo codes. In this paper, genetic algorithms are used to improve the free distance of such codes. Results in terms of bit error rate are compared with those of the other main methods.
{"title":"Turbo codes optimization using genetic algorithms","authors":"N. Durand, J. Alliot, Boris Bartolome","doi":"10.1109/CEC.1999.782506","DOIUrl":"https://doi.org/10.1109/CEC.1999.782506","url":null,"abstract":"Turbo codes have been an important revolution in the digital communications world. Since their discovery, the coding community has been trying to understand, explain and improve turbo codes. The floor phenomenon is the parallel concatenated convolutional turbo codes main problem. In this paper, genetic algorithms are used to lower the free distance of such a code. Results in terms of bit error rate are compared to the other main methods.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122111786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}