A Comprehensive Analysis of the Invariance of Exploratory Landscape Analysis Features to Function Transformations
Urban Škvorc, T. Eftimov, P. Korošec
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870313
Exploratory Landscape Analysis is a powerful technique that allows us to gain an understanding of a problem landscape solely by sampling the problem space. It has been used successfully in a number of applications, for example automatic algorithm selection. However, recent work has shown that Exploratory Landscape Analysis has specific weaknesses that its users should be aware of. Because the technique is sample-based, it is sensitive to the choice of sampling strategy. Additionally, many landscape features are not invariant to transformations of the underlying samples that should have no effect on algorithm performance, specifically shifting and scaling. The effect of shifting and scaling has so far been demonstrated only on a single problem set and dimensionality. In this paper, we perform a comprehensive analysis of the invariance of Exploratory Landscape Analysis features to these two transformations, considering different sampling strategies, sample sizes, problem dimensionalities, and benchmark problem sets to determine their individual and combined effects. We show that these factors have very limited influence on the features' invariance, whether considered individually or in combination.
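The kind of invariance check the paper performs at scale can be illustrated with a minimal sketch (not the authors' pipeline, which covers full ELA feature sets and several sampling strategies): a moment-based feature of the sampled objective values, here sample skewness, is exactly invariant to a positive linear scaling of the objective, which is one of the two transformations studied.

```python
import random

def skewness(ys):
    # Sample skewness of a list of objective values (a simple ELA-style feature).
    n = len(ys)
    mean = sum(ys) / n
    m2 = sum((y - mean) ** 2 for y in ys) / n
    m3 = sum((y - mean) ** 3 for y in ys) / n
    return m3 / m2 ** 1.5

def sphere(x):
    return sum(xi * xi for xi in x)

random.seed(0)
sample = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(200)]
ys = [sphere(x) for x in sample]
# Scaling transformation: g(x) = a * f(x) + b with a > 0 should not matter.
ys_scaled = [4.0 * y + 7.0 for y in ys]
print(abs(skewness(ys) - skewness(ys_scaled)) < 1e-9)  # True: this feature is scale-invariant
```

Features without this property (many ELA features, per the paper) would change under the same transformation even though algorithm performance does not.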
A Study on Six Memetic Strategies for Multimodal Optimisation by Differential Evolution
Ferrante Neri, Matthew Todd
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870221
This paper presents an experimental study on memetic strategies to enhance the performance of population-based metaheuristics for multimodal optimisation. The purpose of this work is to devise recommendations about algorithmic design that allow a successful combination of local search and niching techniques. Six memetic strategies are presented and tested over five population-based algorithms endowed with niching techniques. Experimental results clearly show that local search enhances the performance of the framework for multimodal optimisation in terms of both peak ratio and success rate. The most promising results are obtained by the variants that employ an archive to pre-select the solutions undergoing local search, thus avoiding computational waste. Furthermore, promising results are obtained by variants that reduce the exploitation pressure of the population-based framework by using a simulated annealing logic in the selection process, leaving the exploitation task to the local search.
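The archive-based pre-selection the results favour can be sketched generically (an illustration of the idea, not the paper's exact framework; the archive, radius, and hill-climber here are assumptions): only solutions sufficiently far from previously refined ones undergo local search, so the budget is not wasted re-refining the same basin.

```python
import random

def hill_climb(f, x, step=0.1, iters=50):
    # Simple local search: random perturbation, keep improvements.
    best, fbest = x[:], f(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def maybe_refine(f, x, archive, radius=0.5):
    # Pre-selection: skip local search near an already-refined solution.
    if any(dist(x, a) < radius for a in archive):
        return x, f(x)
    y, fy = hill_climb(f, x)
    archive.append(y)
    return y, fy

random.seed(1)
f = lambda x: sum(xi * xi for xi in x)
archive = []
pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(10)]
refined = [maybe_refine(f, x, archive) for x in pop]
print(len(archive) <= len(pop))  # True: only archive-approved individuals were refined
```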
Genetic Micro-Programs for Automated Software Testing with Large Path Coverage
Jarrod Goschen, Anna Sergeevna Bosman, S. Gruner
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870310
Ongoing progress in computational intelligence (CI) has led to an increased desire to apply CI techniques to improve software engineering processes, particularly software testing. Existing state-of-the-art automated software testing techniques focus on using search algorithms to discover input values that achieve high execution path coverage. These algorithms are trained on the same code that they intend to test, requiring instrumentation and lengthy search times for each software component. This paper outlines a novel genetic programming framework in which the evolved solutions are not input values but micro-programs that can repeatedly generate input values to efficiently explore a software component's input parameter domain. We also argue that our approach generalises to many different software systems and is thus not specific to the particular software component on which it was trained.
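One way to picture the evolved micro-programs (the instruction set and encoding below are hypothetical, chosen only for illustration; the paper defines its own representation) is as tiny instruction sequences that, run repeatedly from a seed state, emit a stream of input values for the component under test rather than a single input:

```python
# Hypothetical micro-program: a list of (op, operand) pairs over one register.
OPS = {
    "add": lambda r, k: r + k,
    "mul": lambda r, k: r * k,
    "mod": lambda r, k: r % k if k else r,
}

def run_micro_program(program, seed, n):
    # Execute the whole program n times, emitting one test input per pass,
    # so a single evolved individual yields many inputs for the component.
    r, inputs = seed, []
    for _ in range(n):
        for op, k in program:
            r = OPS[op](r, k)
        inputs.append(r)
    return inputs

# A candidate individual from the GP population (illustrative, not evolved here).
prog = [("mul", 3), ("add", 7), ("mod", 101)]
print(run_micro_program(prog, seed=1, n=5))  # [10, 37, 17, 58, 80]
```

A GP fitness function would then score `prog` by the path coverage its generated inputs achieve, rather than scoring any single input.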
Quantum Mating Operator: A New Approach to Evolve Chromosomes in Genetic Algorithms
G. Acampora, Roberto Schiattarella, A. Vitiello
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870425
Genetic Algorithms (GAs) are optimization methods that search for near-optimal solutions by applying well-known operations such as selection, crossover and mutation. In particular, crossover and mutation create new solutions from selected parents with the goal of discovering better and better solutions in the search space. In the literature, several approaches have been defined to create new solutions from the mating pool in an effort to improve the performance of genetic optimization. This paper enriches that literature by introducing a new mating operator that harnesses the stochastic nature of quantum computation to evolve individuals in a classical genetic workflow. The new approach, named the Quantum Mating Operator, acts as a multi-parent operator that identifies allele-frequency patterns in a collection of individuals selected by conventional selection operators and encodes them in a quantum state. This state is then mutated and measured to generate a new classical chromosome. Experimental results show that GAs equipped with the proposed operator outperform those equipped with traditional crossover and mutation operators on well-known benchmark functions.
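The operator's flow can be approximated classically (a sketch of the idea only, not the quantum implementation; the perturbation used as "mutation" is an assumption): allele frequencies of the selected multi-parent pool define a probability per locus, a small random perturbation stands in for mutating the quantum state, and sampling stands in for measurement.

```python
import random

def quantum_style_mating(pool, mut=0.05, rng=random):
    # Allele frequency per locus across the multi-parent pool (binary encoding).
    n = len(pool[0])
    freq = [sum(ind[i] for ind in pool) / len(pool) for i in range(n)]
    # "Mutate" the state: perturb each probability, clamped to [0, 1].
    freq = [min(1.0, max(0.0, p + rng.uniform(-mut, mut))) for p in freq]
    # "Measure": collapse each locus to a classical allele.
    return [1 if rng.random() < p else 0 for p in freq]

random.seed(2)
pool = [[1, 1, 0, 0], [1, 0, 0, 1], [1, 1, 0, 0]]
child = quantum_style_mating(pool)
print(child)  # one new chromosome sampled from the pool's allele patterns
```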
Evolutionary Real-world Item Stock Allocation for Japanese Electric Commerce
Yasuyuki Mitsui, Y. Yamakoshi, Hiroyuki Sato
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870390
This work addresses a real-world item stock allocation problem in Japanese e-commerce using evolutionary optimization. We use actual data on ordered items, existing warehouses, and customer order records; the target area is all of Japan. The task is to find the optimal distribution of one thousand items across eight warehouses. The problem has two objectives: minimizing the total shipping cost and minimizing the average number of warehouses stocking each item. It also has constraints, including warehouse capacities and the maximum number of shipments from each warehouse. Since the commonly used uniform crossover tends to be destructive on this problem, we propose four crossovers: the item, warehouse, item-uniform, and warehouse-uniform crossovers. Experimental results show that the proposed item crossover is well suited to this problem, and the obtained item stock allocations significantly reduce shipping and stocking costs compared with a human-made allocation.
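The contrast with gene-wise uniform crossover can be sketched as follows (an illustrative reconstruction from the abstract; the representation of an allocation as a map from item to its set of stocking warehouses is an assumption): the item crossover inherits each item's complete warehouse assignment from one parent, so it never produces the fragmentary mixes that make uniform crossover destructive here.

```python
import random

def item_crossover(p1, p2, rng=random):
    # Each parent maps item -> tuple of warehouses stocking it.
    # Inherit every item's whole assignment from one parent, keeping
    # per-item allocations intact (less destructive than gene-wise mixing).
    return {item: (p1 if rng.random() < 0.5 else p2)[item] for item in p1}

random.seed(3)
p1 = {"itemA": (0, 3), "itemB": (1,), "itemC": (2, 5)}
p2 = {"itemA": (1, 4), "itemB": (2,), "itemC": (0,)}
child = item_crossover(p1, p2)
# Every per-item assignment comes whole from one parent, never mixed.
print(all(child[i] in (p1[i], p2[i]) for i in p1))  # True
```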
Ensemble of deep learning models with surrogate-based optimization for medical image segmentation
Truong Dang, Anh Vu Luong, Alan Wee-Chung Liew, J. Mccall, T. Nguyen
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870389
Deep Neural Networks (DNNs) have created a breakthrough in medical image analysis in recent years. Because clinical applications of automated medical analysis must be reliable, robust and accurate, it is necessary to devise effective DNN-based models for medical applications. In this paper, we propose an ensemble framework of DNNs for medical image segmentation, noting that combining multiple models can yield better results than any single constituent. We introduce an effective strategy for combining individual segmentation models based on swarm intelligence, a family of optimization algorithms inspired by biological processes. The high computational cost of objective function evaluation during optimization is relieved by a surrogate-based method: we train a surrogate on the objective function information from some populations and then use it to predict the objective values of each candidate in subsequent populations. Experiments on a number of public datasets indicate that our framework achieves competitive results within reasonable computation time.
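The surrogate step can be sketched generically (a 1-nearest-neighbour surrogate is used here purely for illustration; the paper's surrogate model and objective are different): expensive evaluations from early populations are recorded, and later candidates are scored by the model instead of by the true objective.

```python
def true_objective(x):
    # Stand-in for an expensive evaluation (e.g. scoring a full segmentation).
    return sum((xi - 0.5) ** 2 for xi in x)

class NearestNeighbourSurrogate:
    def __init__(self):
        self.xs, self.ys = [], []
    def record(self, x, y):
        self.xs.append(x)
        self.ys.append(y)
    def predict(self, x):
        # Return the recorded value of the closest evaluated point.
        d = [sum((a - b) ** 2 for a, b in zip(x, xi)) for xi in self.xs]
        return self.ys[d.index(min(d))]

surrogate = NearestNeighbourSurrogate()
for x in [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]:   # early populations: pay full cost
    surrogate.record(x, true_objective(x))
cheap = surrogate.predict((0.55, 0.45))          # later candidate: model only
print(cheap)  # 0.0 -- the value recorded at the nearest point, (0.5, 0.5)
```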
Quantum-Inspired Structure-Preserving Probabilistic Inference
Sascha Mücke, N. Piatkowski
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870260
Probabilistic methods serve as the underlying framework of various machine learning techniques. A central problem when using these models is computing the partition function, which is intractable for many models of interest. Here, we present the first quantum-inspired method designed specifically for computing fast approximations to the partition function. Our approach uses a novel hardware solver for quadratic unconstrained binary optimization problems that relies on evolutionary computation. The specialized design allows us to assess millions of candidate solutions per second, leading to high-quality maximum a posteriori (MAP) estimates, even for hard instances. We investigate the expected run-time of our solver and devise new ultra-sparse parity constraints to combine our device with the WISH approximation scheme. A SIMD-like packing strategy further allows us to solve multiple MAP instances at once, resulting in high efficiency and an additional speed-up. Numerical experiments show that our quantum-inspired approach produces accurate and robust results. While pure software implementations of the WISH algorithm typically run on large compute clusters with hundreds of CPUs, our results are achieved on two FPGA boards, each of which consumes below 10 watts. Moreover, our results extend seamlessly to adiabatic quantum computers.
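The core object the solver searches over, a quadratic unconstrained binary optimization (QUBO) problem, together with a minimal evolutionary bit-flip search for the MAP estimate, can be sketched as follows (illustrative only; the paper's solver is a specialized FPGA design assessing millions of candidates per second, not this (1+1)-EA):

```python
import random

def qubo_energy(Q, x):
    # E(x) = x^T Q x for a binary vector x; the MAP estimate minimizes E.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def one_plus_one_ea(Q, n, iters=2000, rng=random):
    # (1+1)-EA: flip each bit with probability 1/n, keep non-worsening offspring.
    x = [rng.randint(0, 1) for _ in range(n)]
    e = qubo_energy(Q, x)
    for _ in range(iters):
        y = [b ^ (rng.random() < 1 / n) for b in x]
        ey = qubo_energy(Q, y)
        if ey <= e:
            x, e = y, ey
    return x, e

random.seed(4)
# Small QUBO whose minimum energy (-3, at [0,1,0] or [1,1,0]) is
# easy to verify by enumerating all 8 binary vectors.
Q = [[-2, 1, 0], [1, -3, 2], [0, 2, 1]]
x, e = one_plus_one_ea(Q, n=3)
print(e)  # -3
```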
An Enhanced Multi-Phase Stochastic Differential Evolution Framework for Numerical Optimization
Heba Abdelnabi, Mostafa Z. Ali, M. Daoud, R. Alazrai, A. Awajan, Robert Reynolds, P. N. Suganthan
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870438
Many real-life problems can be expressed as optimization problems, which challenge researchers to design efficient algorithms capable of finding optimal solutions within the smallest budget. Stochastic Fractal Search (SFS) has proved its power as a metaheuristic algorithm through the large body of research that uses it to optimize different industrial and engineering tasks. Nevertheless, as with any metaheuristic algorithm, and in line with the "No Free Lunch" theorem, SFS may suffer from premature convergence and entrapment in local minima. To address these issues, a popular Differential Evolution variant, Success-History based Adaptive Differential Evolution (SHADE), is used to enhance SFS performance in a unique three-phase hybrid framework. A local search is also incorporated into the proposed framework to refine the quality of generated solutions and accelerate the hybrid algorithm's convergence. The proposed hybrid algorithm, named eMpSDE, is tested on a diverse set of optimization problems of varying complexity, consisting of well-known standard unconstrained unimodal and multimodal test functions and several constrained engineering design problems. A comparative analysis against recent state-of-the-art algorithms is then carried out to validate its competitiveness.
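SHADE's defining mechanism, the success-history memory from which F and CR are sampled, can be sketched as follows (a sketch following my understanding of the original SHADE formulation: weighted Lehmer mean for F, weighted arithmetic mean for CR; the paper's three-phase hybrid with SFS is not reproduced here):

```python
def update_shade_memory(m_f, m_cr, s_f, s_cr, deltas, k):
    # s_f, s_cr: F/CR values that produced improvements this generation;
    # deltas: the corresponding fitness improvements, used as weights.
    if s_f:
        total = sum(deltas)
        w = [d / total for d in deltas]
        # Weighted Lehmer mean for F (biased toward larger successful F values).
        m_f[k] = (sum(wi * f * f for wi, f in zip(w, s_f))
                  / sum(wi * f for wi, f in zip(w, s_f)))
        # Weighted arithmetic mean for CR.
        m_cr[k] = sum(wi * c for wi, c in zip(w, s_cr))
        k = (k + 1) % len(m_f)  # advance the circular memory index
    return k

m_f, m_cr = [0.5] * 5, [0.5] * 5
k = update_shade_memory(m_f, m_cr, [0.9, 0.4], [0.1, 0.8], [3.0, 1.0], 0)
print(round(m_f[0], 3), round(m_cr[0], 3), k)  # 0.835 0.275 1
```

Each generation then samples F and CR around a randomly chosen memory slot, so parameter settings that recently produced improvements steer future search.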
Reducing the Number of Training Cases in Genetic Programming
Giacomo Zoppi, L. Vanneschi, M. Giacobini
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870327
In Machine Learning, one of the most common and discussed questions is how to choose an adequate number of data observations to train our models satisfactorily: in other words, finding the right amount of data needed to create a model that is neither underfitted nor overfitted, but instead achieves reasonable generalization ability. The problem grows in importance in Genetic Programming, where fitness evaluation is often rather slow. Therefore, finding the minimum amount of data that enables us to discover the solution to a given problem could bring significant benefits. Using the notion of entropy in a dataset, we seek to quantify the information gain obtainable from each additional data point. We then look for the smallest percentage of the data that carries enough information to yield satisfactory results. We first present an example derived from the state of the art. Then, we question a relevant part of our procedure and introduce two case studies to experimentally validate our theoretical hypothesis.
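The entropy-based reasoning can be sketched on a toy label distribution (an illustration of the principle, not the authors' procedure): as observations are added, the empirical entropy of the sample approaches that of the full dataset, and the smallest fraction that already matches it is a candidate training-set size.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (bits) of the empirical label distribution.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

data = ["a", "b", "a", "c", "a", "b", "a", "c", "b", "a"] * 10
full = entropy(data)
# Smallest prefix whose entropy is within 1% of the full dataset's.
for frac in (0.1, 0.2, 0.3, 0.5, 1.0):
    k = int(len(data) * frac)
    if abs(entropy(data[:k]) - full) <= 0.01 * full:
        print(f"{frac:.0%} of the data already carries the full label entropy")
        break
```

On this deliberately repetitive dataset a 10% prefix already matches the full entropy; real data would need a larger fraction, which is exactly what the case studies measure.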
Now I Know My Alpha, Beta, Gammas: Variants in an Epidemic Scheme
Michael Dubé, S. Houghten
Pub Date: 2022-07-18, DOI: 10.1109/CEC55065.2022.9870391
Personal contact networks represent the social connections that exist between individuals within a population. Producing accurate networks that capture the actual vectors of infection can be useful for modelling epidemic trajectory and outcomes, both of which are significantly impacted by a network's structure. An evolutionary algorithm is used to evolve these networks subject to two fitness measures: epidemic duration and epidemic spread through a population. With each infection there is a small probability of generating a new variant, and being infected with one variant provides partial immunity to future variants. This allows us to evaluate the impact of each variant, a significant innovation compared with other work. The amount by which each variant was allowed to change had a significant impact on epidemic spread, while for epidemic duration the probability of new variants was the primary cause of increases.
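The variant mechanism can be sketched as a simple infection rule (an illustrative reconstruction from the abstract, not the paper's model; representing variants as points on a line and susceptibility as distance to the nearest previously seen variant are assumptions): each transmission mutates with small probability, and partial immunity makes reinfection by similar variants unlikely.

```python
import random

def try_infect(history, variant, mut_prob=0.02, step=0.1, rng=random):
    # history: variants this individual has already had (floats on a line).
    if rng.random() < mut_prob:
        variant += step  # this transmission spawns a new variant
    if history:
        # Partial immunity: infection chance grows with distance to the
        # nearest previously seen variant, capped at full susceptibility.
        susceptibility = min(1.0, min(abs(variant - v) for v in history))
        if rng.random() >= susceptibility:
            return None  # immune this time
    history.append(variant)
    return variant

random.seed(5)
alice = []                      # naive host, no prior infections
v = try_infect(alice, 1.0)
print(v is not None)            # True: naive hosts are always infected
bob = [1.0]                     # already had variant 1.0
w = try_infect(bob, 1.0)        # near-identical variant: reinfection is rare
print(w)
```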