Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557848
M. Zambrano-Bigiarini, M. Clerc, Rodrigo Rojas-Mujica
In this work we benchmark, for the first time, the latest Standard Particle Swarm Optimisation algorithm (SPSO-2011) against the 28 test functions designed for the Special Session on Real-Parameter Single Objective Optimisation at CEC-2013. SPSO-2011 is a major improvement over previous PSO versions, with an adaptive random topology and rotational invariance constituting the main advancements. Results showed an outstanding performance of SPSO-2011 on the family of unimodal and separable test functions, with fast convergence to the global optimum, while good performance was observed for four rotated multimodal functions. Conversely, SPSO-2011 showed the weakest performance on all composition problems (i.e. highly complex functions specifically designed for this competition) and certain multimodal test functions. In general, fast convergence towards the region of the global optimum was achieved, requiring fewer than 10E+03 function evaluations. However, for most composition and multimodal functions SPSO-2011 showed a limited capability to "escape" from sub-optimal regions. Despite this limitation, a desirable feature of SPSO-2011 was its scalable behaviour, which was observed on problems of up to 50 dimensions, i.e. it kept a similar performance across dimensions with no need to increase the population size. Therefore, it seems advisable that future PSO improvements focus on enhancing the algorithm's ability to solve non-separable and asymmetrical functions with a large number of local minima and a second global minimum located far from the true optimum. This work is the first effort towards providing a baseline for a fair comparison of future PSO improvements.
Title: Standard Particle Swarm Optimisation 2011 at CEC-2013: A baseline for future PSO improvements (2013 IEEE Congress on Evolutionary Computation).
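For readers unfamiliar with PSO, the following is a minimal sketch of the classic global-best PSO loop that SPSO-2011 builds on. It is not SPSO-2011 itself: that algorithm additionally uses an adaptive random topology and rotation-invariant sampling in a hypersphere around a centre of gravity. The parameter values follow the common choice w = 1/(2 ln 2) and c = 0.5 + ln 2.

```python
import random

def pso_minimise(f, dim, n_particles=30, iters=200,
                 w=0.721, c=1.193, lo=-5.0, hi=5.0, seed=1):
    """Classic global-best PSO sketch (not the full SPSO-2011 variant,
    which adds an adaptive random topology and rotation-invariant
    hypersphere sampling)."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]           # personal best positions
    pval = [f(xi) for xi in x]            # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                v[i][d] = (w * v[i][d]
                           + c * rng.random() * (pbest[i][d] - x[i][d])
                           + c * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            val = f(x[i])
            if val < pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

# Sphere function: unimodal and separable, minimum 0 at the origin.
best, val = pso_minimise(lambda p: sum(t * t for t in p), dim=5)
```

On unimodal separable functions like the sphere above, even this bare-bones loop converges quickly, which matches the behaviour the abstract reports for that function family.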
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557758
H. Ishibuchi, Yuki Tanigaki, Naoya Akedo, Y. Nojima
An important implementation issue in the design of hybrid evolutionary multiobjective optimization algorithms with local search (i.e., multiobjective memetic algorithms) is how to strike a balance between local search and global search. If local search is applied to all individuals at every generation, almost all computation time is spent on local search. As a result, the global search ability of memetic algorithms is not well utilized. We can use three ideas for decreasing the computational load of local search. The first idea is to apply local search to only a small number of individuals. This can be implemented by introducing a local search probability, which is used to choose only a small number of initial solutions for local search from the current population. The second idea is a periodic (i.e., intermittent) use of local search. This can be implemented by introducing a local search interval (e.g., every 10 generations), which specifies when local search is applied. The third idea is early termination of local search: local search from each initial solution is terminated after a small number of neighbors are examined. This can be implemented by introducing a local search length, which is the number of neighbors examined in a series of iterated local search moves from a single initial solution. In this paper, we discuss the use of these three ideas to strike a local-global search balance. Through computational experiments on a two-objective 500-item knapsack problem, we compare various settings of local search, such as short local search from all individuals at every generation, long local search from only a few individuals at every generation, and periodic long local search from all individuals. Global search in this paper means genetic search by crossover and mutation in multiobjective memetic algorithms.
Title: How to strike a balance between local search and global search in multiobjective memetic algorithms for multiobjective 0/1 knapsack problems.
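The three budget-limiting knobs the abstract describes can be sketched in a toy single-objective memetic GA. This is only an illustration under simplifying assumptions: the problem here is OneMax on bit strings, not the paper's two-objective 500-item knapsack, and the GA operators are generic choices.

```python
import random

def memetic_onemax(n_bits=40, pop_size=20, generations=60,
                   ls_probability=0.1,   # knob 1: fraction of individuals refined
                   ls_interval=10,       # knob 2: local search every k generations
                   ls_length=15,         # knob 3: neighbours examined per start
                   seed=3):
    """Toy memetic GA on OneMax showing the three ways of limiting
    the local-search budget discussed in the abstract."""
    rng = random.Random(seed)
    fitness = sum  # OneMax: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for gen in range(1, generations + 1):
        # global search: binary tournaments + uniform crossover + mutation
        new_pop = []
        for _ in range(pop_size):
            a, b = (max(rng.sample(pop, 2), key=fitness) for _ in range(2))
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 1.0 / n_bits:
                k = rng.randrange(n_bits)
                child[k] ^= 1
            new_pop.append(child)
        pop = new_pop
        # local search: only every ls_interval generations, only for a
        # ls_probability fraction of individuals, and only ls_length
        # neighbour evaluations from each start solution
        if gen % ls_interval == 0:
            for i in range(pop_size):
                if rng.random() < ls_probability:
                    cur = pop[i]
                    for _ in range(ls_length):
                        k = rng.randrange(n_bits)
                        nb = cur[:]
                        nb[k] ^= 1          # examine one bit-flip neighbour
                        if fitness(nb) >= fitness(cur):
                            cur = nb
                    pop[i] = cur
    return max(fitness(ind) for ind in pop)

best = memetic_onemax()
```

Setting `ls_probability=1.0`, `ls_interval=1` and a large `ls_length` recovers the "local search from all individuals at every generation" extreme the paper compares against.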
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557589
Seamus Hill, C. O'Riordan
This paper examines the impact of changes in dimensionality on a multi-layered genotype-phenotype mapped GA. To gain an understanding of the impact, we carry out a series of experiments on a number of well-understood problems and compare the performance of a simple GA (SGA) to that of a multi-layered GA (MGA), to demonstrate their ability to search landscapes whose difficulty varies with the dimensionality of each function. The paper also examines the role of diversity maintenance in assisting the search, and identifies the natural increase in diversity, as the level of problem difficulty increases, that results from the layered Genotype-Phenotype mapping. Initial results indicate that it may be advantageous to include a multi-layered genotype-phenotype mapping under certain circumstances.
Title: Analysing the impact of dimensionality on diversity in a multi-layered Genotype-Phenotype mapped genetic algorithm.
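The abstract does not specify the mapping layers, so the following is a purely hypothetical two-layer genotype-phenotype mapping, included only to show the redundancy such mappings introduce: distinct genotypes can decode to the same phenotype, which lets genotypic diversity persist even as phenotypes converge.

```python
# Hypothetical two-layer mapping: a 12-bit genotype is first reduced
# to a 6-symbol intermediate layer (logical OR of adjacent bit pairs),
# which is then decoded to an integer phenotype. Many genotypes map
# to one phenotype, so genotypic diversity can survive phenotypic
# convergence.
def to_intermediate(genotype):
    assert len(genotype) % 2 == 0
    return [1 if genotype[i] + genotype[i + 1] >= 1 else 0
            for i in range(0, len(genotype), 2)]

def to_phenotype(intermediate):
    # interpret the intermediate layer as a binary number
    value = 0
    for bit in intermediate:
        value = value * 2 + bit
    return value

g1 = [1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
g2 = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1]
p1 = to_phenotype(to_intermediate(g1))  # both genotypes decode to
p2 = to_phenotype(to_intermediate(g2))  # the same phenotype
```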
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557944
Honghao Chang, F. Zuren, Zhigang Ren
Many complex networks have been shown to have community structure. How to detect the communities is of great importance for understanding the organization and function of networks. Due to its NP-hard property, this problem is difficult to solve. In this paper, we propose an Ant Colony Optimization (ACO) approach to address the community detection problem by maximizing the modularity measure. Our algorithm follows the scheme of max-min ant system, and has some new features to accommodate the characteristics of complex networks. First, the solutions take the form of a locus-based adjacency representation, in which the communities are coded as connected components of a graph. Second, the structural information is incorporated into ACO, and we propose a new kind of heuristic based on the correlation between vertices. Experimental results obtained from tests on the LFR benchmark and four real-life networks demonstrate that our algorithm can improve the modularity value, and also can successfully detect the community structure.
Title: Community detection using Ant Colony Optimization.
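The locus-based adjacency representation mentioned in the abstract can be decoded without any ACO machinery: each node's gene points at one neighbour, and communities are the connected components of the resulting graph. The sketch below shows only that decoding step (via union-find); the pheromone trails, max-min bounds and correlation-based heuristic of the paper's algorithm are omitted.

```python
def decode_locus(genes):
    """Decode a locus-based adjacency representation: genes[i] = j
    means an undirected link i-j; the communities are the connected
    components of the graph formed by those links."""
    parent = list(range(len(genes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in enumerate(genes):
        parent[find(i)] = find(j)          # union the two endpoints
    communities = {}
    for node in range(len(genes)):
        communities.setdefault(find(node), set()).add(node)
    return sorted(communities.values(), key=min)

# Example individual over 6 nodes: two 3-cycles, 0-1-2 and 3-4-5.
genes = [1, 2, 0, 4, 5, 3]
parts = decode_locus(genes)
```

One advantage of this encoding, noted in the community-detection literature, is that the number of communities emerges from the decoding rather than being fixed in advance.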
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557580
F. O. França, G. P. Coelho
The analysis of complex networks is an important research topic that helps us understand the underlying behavior of complex systems and the interactions of their components. One particularly relevant analysis is the detection of communities formed by such interactions. Most community detection algorithms work as optimization tools that minimize a given quality function, while assuming that each node belongs to a single community. However, most complex networks contain nodes that belong to two or more communities, which are called bridges. The identification of bridges is crucial to several problems, as they often play important roles in the system described by the network. By exploiting the multimodality of quality functions, it is possible to obtain distinct optimal communities where, in each solution, each bridge node belongs to a distinct community. This paper proposes a technique that tries to identify a set of (possibly) overlapping communities by combining diverse solutions contained in a pool, which correspond to disjoint community partitions of a given network. To obtain the pool of partitions, an adapted version of the immune-inspired algorithm named cob-aiNet[C] was adopted here. The proposed methodology was applied to four real-world social networks and the obtained results were compared to those reported in the literature. The comparisons have shown that the proposed approach is competitive and even capable of overcoming the best results reported for some of the problems.
Title: Identifying overlapping communities in complex networks with multimodal optimization.
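The combination step can be illustrated with a simple stand-in heuristic (the actual cob-aiNet[C]-based procedure is not reproduced here): given a pool of disjoint partitions of the same network, a node whose co-memberships change from one partition to another is a bridge candidate, since different near-optimal solutions place it in different communities.

```python
from itertools import combinations

def instability_scores(partitions):
    """Score each node by the number of other nodes with which its
    co-membership changes across a pool of disjoint partitions
    (each partition is a list of node sets). High scores flag
    bridge/overlap candidates."""
    nodes = sorted(set().union(*[set().union(*p) for p in partitions]))
    together = {pair: set() for pair in combinations(nodes, 2)}
    for idx, part in enumerate(partitions):
        label = {}
        for community in part:
            for n in community:
                label[n] = min(community)   # community id within this partition
        for a, b in combinations(nodes, 2):
            if label[a] == label[b]:
                together[(a, b)].add(idx)
    scores = {n: 0 for n in nodes}
    for (a, b), occ in together.items():
        if 0 < len(occ) < len(partitions):  # co-members in some runs, not all
            scores[a] += 1
            scores[b] += 1
    return scores

# Two near-optimal partitions of a 6-node network that disagree only
# about node 2, which sits between the two groups.
p1 = [{0, 1, 2}, {3, 4, 5}]
p2 = [{0, 1}, {2, 3, 4, 5}]
scores = instability_scores([p1, p2])
bridge = max(scores, key=scores.get)
```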
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557799
N. Padhye, Pulkit Mittal, K. Deb
In this paper, we apply the Differential Evolution (DE) algorithm in combination with a recently proposed constraint-handling strategy and study the performance of the resulting algorithm on the CEC'13 test suite [1] and other constrained optimization problems. The goal of this exercise is to clearly identify and highlight the challenges encountered by the DE search while solving a range of optimization problems. We emphasize that understanding and resolving fundamental issues of a search procedure, and considering the nature of the optimization problems at hand, is the key to effective deployment of evolutionary procedures for search and optimization.
Title: Differential evolution: Performances and analyses.
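The abstract does not name the constraint-handling strategy, so the sketch below pairs a standard DE/rand/1/bin loop with one common choice, Deb's feasibility rules (a feasible solution beats an infeasible one; between infeasible solutions, the smaller violation wins). It is a stand-in, not the paper's actual method.

```python
import random

def de_constrained(f, violation, dim, lo, hi,
                   np_=30, F=0.7, CR=0.9, gens=150, seed=7):
    """DE/rand/1/bin with Deb's feasibility rules as an illustrative
    constraint-handling choice (the paper's exact strategy is not
    specified in the abstract)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]

    def better(a, b):
        va, vb = violation(a), violation(b)
        if va == 0 and vb == 0:
            return f(a) <= f(b)     # both feasible: compare objective
        if va == 0 or vb == 0:
            return va == 0          # feasible beats infeasible
        return va <= vb             # both infeasible: smaller violation

    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([k for k in range(np_) if k != i], 3)
            jrand = rng.randrange(dim)
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    trial[j] = min(hi, max(lo,
                        pop[r1][j] + F * (pop[r2][j] - pop[r3][j])))
            if better(trial, pop[i]):
                pop[i] = trial
    best = pop[0]
    for ind in pop[1:]:
        if better(ind, best):
            best = ind
    return best

# Minimise x^2 + y^2 subject to x + y >= 1 (optimum 0.5 at x = y = 0.5).
f = lambda p: p[0] ** 2 + p[1] ** 2
g = lambda p: max(0.0, 1.0 - (p[0] + p[1]))   # constraint violation
best = de_constrained(f, g, dim=2, lo=-5.0, hi=5.0)
```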
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557860
Liam Cervante, Bing Xue, L. Shang, Mengjie Zhang
Dimension reduction plays an important role in many classification tasks. In this work, we propose a new filter dimension reduction algorithm (PSOPRSE) using binary particle swarm optimisation and probabilistic rough set theory. PSOPRSE aims to maximise a classification performance measure and minimise a newly developed measure reflecting the number of attributes. Both measures are formed by probabilistic rough set theory. PSOPRSE is compared with two existing PSO-based algorithms and two traditional filter dimension reduction algorithms on six discrete datasets of varying difficulty. Five continuous datasets containing a large number of attributes are discretised and used to further examine the performance of PSOPRSE. Three learning algorithms, namely decision trees, nearest neighbour algorithms and naive Bayes, are used in the experiments to examine the generality of PSOPRSE. The results show that PSOPRSE can significantly decrease the number of attributes and maintain or improve the classification performance over using all attributes. In most cases, PSOPRSE outperforms the first PSO-based algorithm and achieves better or much better classification performance than the second PSO-based algorithm and the two traditional methods, although the number of attributes is slightly larger in some cases. The results also show that PSOPRSE generalises across the three different classification algorithms.
Title: Binary particle swarm optimisation and rough set theory for dimension reduction in classification.
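The binary PSO underlying PSOPRSE can be sketched with the classic Kennedy-Eberhart update, in which the sigmoid of each velocity component gives the probability that the corresponding bit (here, "attribute selected") is 1. The rough-set-based objective of the paper is replaced below by a toy fitness that merely mimics "reward accuracy, penalise attribute count"; this is an assumption for illustration only.

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=80,
               w=0.7, c1=1.5, c2=1.5, vmax=4.0, seed=5):
    """Kennedy-Eberhart binary PSO: sigmoid(velocity) is the
    probability of setting each bit to 1. PSOPRSE would plug its
    probabilistic-rough-set objective in as `fitness`."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    x = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    v = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [fitness(xi) for xi in x]
    gi = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                x[i][d] = 1 if rng.random() < sig(v[i][d]) else 0
            val = fitness(x[i])
            if val > pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val > gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

# Toy stand-in objective: reward selecting bits 0-4, penalise the rest.
target = lambda bits: sum(bits[:5]) - 2 * sum(bits[5:])
gbest, gval = binary_pso(target, n_bits=12)
```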
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557965
P. LaRoche, A. Burrows, A. N. Zincir-Heywood
Securing today's computer networks requires numerous technologies to be constantly developed, refined and challenged. One area of research aiding in this process is protocol analysis, the study of the methods with which networks communicate. Our specific area of interest, the interaction with different protocol implementations, is a crucial component of this domain. Our work aims to identify and highlight a protocol's states and state transitions, while minimizing the a priori knowledge required about the protocol and its different versions (implementations). To this end, our approach uses a Genetic Programming (GP) based technique to analyze a client or a server of a given protocol by interacting with it with minimal a priori information. We evaluate our system against another well-known system from the literature on two different protocols, namely the Dynamic Host Configuration Protocol (DHCP) and the File Transfer Protocol (FTP). We measure the performance of these two systems in terms of the similarities and differences seen in the state diagrams produced for the protocols under testing. Results show that, by using our approach, it is possible to identify the different versions of a given protocol.
Title: How far an evolutionary approach can go for protocol state analysis and discovery.
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557663
Khalid M. Salama, F. E. B. Otero
Ant-Miner is a classification rule discovery algorithm based on the Ant Colony Optimization (ACO) metaheuristic. cAnt-Miner is an extended version of the algorithm that handles continuous attributes on-the-fly during the rule construction process, while μAnt-Miner is an extension that selects the rule class prior to rule construction and utilizes multiple pheromone types, one for each permitted rule class. In this paper, we combine these two algorithms to derive a new approach for learning classification rules using ACO. The proposed approach is based on using a single measure function for 1) computing the heuristics for rule term selection, 2) providing a criterion for discretizing continuous attributes, and 3) evaluating the quality of the constructed rule for the pheromone update. We explore the effect of using different measure functions on the output model in terms of predictive accuracy and model size. Empirical evaluations found that the hypothesis that different measure functions produce different results is acceptable according to Friedman's statistical test.
Title: Using a unified measure function for heuristics, discretization, and rule quality evaluation in Ant-Miner.
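The unifying idea can be sketched with one quality function reused in the three roles the abstract lists: term-selection heuristic, discretisation criterion, and rule evaluation for the pheromone update. The concrete measure below (Laplace-corrected precision) is only an illustrative choice; the paper's actual set of measure functions is not given here.

```python
# One quality function reused in three roles: heuristic, discretisation
# criterion, and rule-quality evaluation. Laplace-corrected precision
# is an illustrative choice, not necessarily one the paper uses.
def quality(tp, fp, n_classes=2):
    return (tp + 1.0) / (tp + fp + n_classes)

# Toy dataset: (continuous attribute value, class) pairs.
data = [(1.0, 'a'), (1.5, 'a'), (2.0, 'a'), (2.6, 'b'), (3.1, 'b'), (4.0, 'b')]

def split_quality(threshold, target='a'):
    # Role 2: score a candidate discretisation "value < threshold -> target".
    tp = sum(1 for x, c in data if x < threshold and c == target)
    fp = sum(1 for x, c in data if x < threshold and c != target)
    return quality(tp, fp)

# Roles 1 and 2: the same function ranks candidate thresholds (and,
# analogously, would rank candidate rule terms as a heuristic).
best_t = max([1.25, 2.3, 2.85, 3.5], key=split_quality)

# Role 3: evaluate the finished rule "value < best_t -> a" for the
# pheromone update.
rule_q = split_quality(best_t)
```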
Pub Date: 2013-06-20. DOI: 10.1109/CEC.2013.6557596
J. Yu, V. Li, Albert Y. S. Lam
An electric vehicle (EV) may be used as energy storage, allowing bi-directional electricity flow between the vehicle's battery and the electric power grid. In order to flatten the load profile of the electricity system, EV scheduling has become a hot research topic in recent years. In this paper, we propose a new formulation of the joint scheduling of EVs and Unit Commitment (UC), called EVUC. Our formulation considers the characteristics of EVs while optimizing the total system running cost. We employ Chemical Reaction Optimization (CRO), a general-purpose optimization algorithm, to solve this problem, and simulation results on a widely used set of instances indicate that CRO can optimize this problem effectively.
Title: Optimal V2G scheduling of electric vehicles and Unit Commitment using Chemical Reaction Optimization.