FastICA and Infomax are the most popular algorithms for computing independent components, but their optimization processes often produce unstable results. To overcome this drawback, a genetic algorithm for independent component analysis is developed that enhances the independence of the resulting components. By modifying FastICA to start from a given initial point and adopting a new, feasible fitness function, the original objective of maximizing mutual independence is achieved. The proposed method is evaluated on a simulated numerical data set using normalized mutual information, negentropy, and kurtosis, together with the accuracy of the estimated components and mixing vectors. Experimental results demonstrate that, compared with FastICA and Infomax, the proposed algorithm gives more accurate results and stronger independence.
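The contrast measures named in the abstract can be made concrete. Below is a minimal numpy sketch that uses the absolute excess kurtosis of a projection of whitened mixtures as a candidate fitness, with a best-of-random-directions search standing in for the GA population; the paper's actual fitness function and GA operators are not specified in the abstract, so all details here are illustrative assumptions.

```python
import numpy as np

def kurtosis(y):
    """Excess kurtosis, a classical non-Gaussianity contrast for ICA."""
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

def fitness(w, Xw):
    """Candidate fitness: |kurtosis| of the 1-D projection w^T Xw,
    where Xw holds whitened mixed signals (one row per channel)."""
    w = w / np.linalg.norm(w)
    return abs(kurtosis(w @ Xw))

rng = np.random.default_rng(0)
# Two independent sources: sub-Gaussian (uniform) and super-Gaussian (Laplace).
S = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(0, 1, 5000)])
X = np.array([[1.0, 0.5], [0.4, 1.0]]) @ S          # linear mixing

# Whiten the mixtures before evaluating the contrast.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# Stand-in for the GA's search: best of 200 random unmixing directions.
best = max((rng.standard_normal(2) for _ in range(200)),
           key=lambda w: fitness(w, Xw))
```

A real GA would evolve such direction vectors with crossover and mutation instead of sampling them independently.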
{"title":"Independent component analysis based on genetic algorithms","authors":"Gaojin Wen, Chunxiao Zhang, Zhaorong Lin, Zhiming Shang, Hongming Wang, Qian Zhang","doi":"10.1109/ICNC.2014.6975837","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975837","url":null,"abstract":"FastICA and Infomax are the most popular algorithms for calculating independent components. These two optimization process usually lead to unstable results. To overcome this drawback, a genetic algorithm for independent component analysis has been developed with enhancement of the independence of the resulting components. By modifying the FastICA to start from given initial point and adopting a new feasible fitness function, the original target of obtaining the maximum mutual independence is achieved. The proposed method is evaluated and tested on a numerical simulative data set from the measures of the normalized mutual information, negentropy and kurtosis, together with the accuracy of the estimated components and mixing vectors. Experimental results on simulated data demonstrate that compared to FastICA and Infomax, the proposed algorithm can give more accurate results together with stronger independence.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124591675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975933
Suqi Zhang, Jing Wang, Qing Wu, Jin Zhan
To address the drawbacks of the standard Genetic Algorithm (GA) for the Maximum Clique Problem (MCP), namely its complexity, long running time, and poor generality, a fast genetic algorithm (FGA) is proposed in this paper. The new algorithm adopts a degree-based chromosome repair method, elitist selection based on random repair, uniform crossover, and inversion mutation. These components speed up the search and effectively prevent the algorithm from becoming trapped in local optima. The algorithm was tested on the DIMACS benchmark graphs; experimental results show that FGA achieves better performance and higher generality.
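A degree-based repair operator of the kind the abstract describes can be sketched as follows; the exact conflict-resolution and extension order are our assumptions, since the abstract gives only the idea.

```python
def repair(candidate, adj):
    """Degree-based repair: while the candidate set contains a non-adjacent
    pair, drop the lower-degree endpoint; then greedily extend the resulting
    clique with high-degree compatible vertices (a sketch of the operator,
    not the paper's exact algorithm)."""
    clique = set(candidate)
    while True:
        conflict = next(((u, v) for u in clique for v in clique
                         if u < v and v not in adj[u]), None)
        if conflict is None:
            break
        u, v = conflict
        clique.discard(min((u, v), key=lambda x: len(adj[x])))
    # Greedy extension: add any vertex adjacent to every clique member.
    for v in sorted(adj, key=lambda x: -len(adj[x])):
        if v not in clique and all(v in adj[u] for u in clique):
            clique.add(v)
    return clique

# Toy graph: triangle {0, 1, 2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
fixed = repair({0, 1, 3}, adj)
```

On this toy instance the infeasible chromosome {0, 1, 3} is repaired into the maximum clique {0, 1, 2}.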
{"title":"A fast genetic algorithm for solving the maximum clique problem","authors":"Suqi Zhang, Jing Wang, Qing Wu, Jin Zhan","doi":"10.1109/ICNC.2014.6975933","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975933","url":null,"abstract":"Aiming at the defects of Genetic Algorithm (GA) for solving the Maximum Clique Problem (MCP) in more complicated, long-running and poor generality, a fast genetic algorithm (FGA) is proposed in this paper. A new chromosome repair method on the degree, elitist selection based on random repairing, uniform crossover and inversion mutation are adopted in the new algorithm. These components can speed up the search and effectively prevent the algorithm from trapping into the local optimum. The algorithm was tested on DIMACS benchmark graphs. Experimental results show that FGA has better performance and high generality.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125003475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975939
Donglin Cao, Dazhen Lin, Yanping Lv
ECG recordings are high-dimensional, and the information useful for diagnosis exists in only a few heartbeats. To achieve good classification performance, most existing approaches use features designed by human experts; there is no approach for automatically extracting useful features. To solve this problem, we propose an ECG Codebook Model (ECGCM), which automatically builds a small number of codes to represent high-dimensional ECG data. ECGCM not only greatly reduces the dimensionality of the ECG, but also captures more meaningful semantic information for Myocardial Infarction detection. Our experimental results show that ECGCM achieves improvements of 2% in sensitivity and 20.5% in specificity for Myocardial Infarction detection.
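One plausible reading of the codebook idea, as a toy sketch: cluster fixed-length heartbeat windows into a few codes and summarize a whole recording by its histogram of code assignments. ECGCM's actual construction is not given in the abstract, so the k-means clustering and the synthetic "beats" below are illustrative assumptions.

```python
import numpy as np

def kmeans_codebook(beats, init, iters=20):
    """Toy k-means over fixed-length heartbeat windows; `init` supplies the
    starting centroids (codes)."""
    codes = init.astype(float).copy()
    for _ in range(iters):
        assign = np.argmin(((beats[:, None] - codes[None]) ** 2).sum(-1), axis=1)
        for j in range(len(codes)):
            if np.any(assign == j):
                codes[j] = beats[assign == j].mean(axis=0)
    return codes

def encode(beats, codes):
    """Represent a recording as a normalized histogram of code assignments,
    a compact low-dimensional summary of the whole ECG."""
    assign = np.argmin(((beats[:, None] - codes[None]) ** 2).sum(-1), axis=1)
    return np.bincount(assign, minlength=len(codes)) / len(beats)

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.1, (50, 8))      # 50 ordinary beats
abnormal = rng.normal(1.0, 0.1, (10, 8))    # 10 beats from a different regime
beats = np.vstack([normal, abnormal])
# Seed one centroid in each regime for the toy demo.
codes = kmeans_codebook(beats, init=beats[[0, -1]])
hist = encode(beats, codes)
```

The histogram exposes the rare abnormal beats (10 of 60) as a distinct code proportion, which is the kind of signal a downstream classifier could use.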
{"title":"ECG codebook model for Myocardial Infarction detection","authors":"Donglin Cao, Dazhen Lin, Yanping Lv","doi":"10.1109/ICNC.2014.6975939","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975939","url":null,"abstract":"ECG is a kind of high dimensional dataset and the useful information of illness only exists in few heartbeats. To achieve a good classification performance, most existing approaches used features proposed by human experts, and there is no approach for automatic useful feature extraction. To solve that problem, we propose an ECG Codebook Model (ECGCM) which automatically builds a small number of codes to represent the high dimension ECG data. ECGCM not only greatly reduces the dimension of ECG, but also contains more meaningful semantic information for Myocardial Infarction detection. Our experiment results show that ECGCM achieves 2% and 20.5% improvement in sensitivity and specificity respectively in Myocardial Infarction detection.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131313545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975814
Yalei Quan, X. Yang
Drum water level is an important parameter for boilers in both thermal and nuclear power plants, but it is hard to measure correctly, which complicates both level-based control and alarming. Usually more than three water gauges are installed to measure the drum water level, and the distributed control system (DCS) derives the final alarm signal with a two-out-of-three voting strategy, which often produces false alarms. Without reliable alarms, very serious accidents can result at a power plant. This paper proposes an approach based on a Back-Propagation (BP) Neural Network to solve this problem: measurements from the different water gauges are fuzzified and fed into the BP network, whose output indicates the alarm type. The method is applied to drum water level data from a nuclear power plant, and the experiments show that alarm accuracy increases markedly.
{"title":"A method for alarming water level of boiler drum on nuclear power plant based on BP Neural Network","authors":"Yalei Quan, X. Yang","doi":"10.1109/ICNC.2014.6975814","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975814","url":null,"abstract":"Drum water level is an important parameter for boilers on both thermal power plant and nuclear power plant. It is hard to measure the level correctly. So it brings some difficulties to the control based on the drum water level, even the alarm. Usually, more than three water gauges are installed for drum water level measurement. And it adopts two-out-of-three strategy for obtaining the final alarm signal in distributed control system (DCS), which is often the false alarm. Without the right alarm, it is to result in very serious disaster on power plant. One approach based on Back-Propagation (BP) Neural Network is proposed in this paper for solving the problem. The measurements from different water gauges are inputted into the BP Neural Network after fuzzy process and the output of the Network represents the type of alarm. Some data of the drum water level from a nuclear power plant is applied with the method of the paper. From the experiments, it can be seen that the alarm accuracy is increased rapidly.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130092908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975831
G. Hang, Xuanchang Zhou, Yang Yang, Danyan Zhang
A No Race (NORA) dynamic logic circuit using neuron-MOS transistors is presented. The circuit replaces the nMOS or pMOS logic block of the conventional NORA dynamic logic circuit with an n-channel neuron-MOS transistor. The proposed full adder shows that the logic block of a NORA circuit can be simplified by using neuron-MOS transistors, and a simple technique for synthesizing the n-channel neuron-MOS logic block from a summation signal is discussed. HSPICE simulations using the TSMC 0.35 μm 2-poly 4-metal CMOS process with a 1.5 V supply verify the effectiveness of the proposed neuron-MOS-based NORA circuits. For comparison, the power consumption and output delay of the proposed NORA adders are measured in the simulations.
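The summation behavior that lets a single neuron-MOS device replace a logic block can be modeled as a capacitive divider: the floating gate sees the capacitance-weighted average of its input voltages, and the transistor switches when that crosses its threshold. This idealized sketch (ignoring residual floating-gate charge and body effect) evaluates a 3-input majority function at the paper's 1.5 V supply; the capacitance values and threshold are illustrative.

```python
def floating_gate_potential(inputs, caps):
    """Capacitive-divider model of a neuron-MOS floating gate: the gate
    potential is the capacitance-weighted sum of the input voltages."""
    total = sum(caps)
    return sum(c * v for c, v in zip(caps, inputs)) / total

def neuron_mos_on(inputs, caps, vth):
    """The device conducts when the weighted sum crosses its threshold,
    so one transistor can evaluate a threshold-logic function."""
    return floating_gate_potential(inputs, caps) > vth

# 3-input majority: equal coupling capacitances, threshold at half Vdd.
vdd, vth = 1.5, 0.75
maj = [neuron_mos_on([a * vdd, b * vdd, c * vdd], [1, 1, 1], vth)
       for a in (0, 1) for b in (0, 1) for c in (0, 1)]
```

Majority is exactly the carry function of a full adder, which is why a neuron-MOS block can shrink the adder's logic tree.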
{"title":"NORA circuit design using neuron-MOS transistors","authors":"G. Hang, Xuanchang Zhou, Yang Yang, Danyan Zhang","doi":"10.1109/ICNC.2014.6975831","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975831","url":null,"abstract":"A No Race (NORA) dynamic logic using neuron-MOS transistor is presented. The circuit is designed using the n-channel neuron-MOS transistor instead of the nMOS logic block or pMOS logic block in the conventional NORA dynamic logic circuit. The proposed full-adder shows that the logic block of NORA circuit can be simplified by utilizing neuron-MOS transistor. A simple synthesis technique of the n-channel neuron-MOS logic block by employing summation signal is discussed. HSPICE simulation results using TSMC 0.35μm 2-ploy 4-metal CMOS process with 1.5V power supply, have verified the effectiveness of the proposed neuron-MOS-based NORA circuits. For comparison, the power consumption and the output delay of the proposed NORA adders are measured during the simulations.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121826460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975835
Lijun Wei, Zhenzhen Zhang, A. Lim
This paper introduces and solves a new practical variant of the integrated routing and loading problem: the capacitated vehicle routing problem minimizing fuel consumption under three-dimensional loading constraints (3L-FCVRP). The problem requires designing routes for a fleet of homogeneous vehicles based at a central depot to serve all customers, whose demands consist of sets of three-dimensional, rectangular, weighted items. Unlike the well-studied capacitated vehicle routing problem with 3D loading constraints (3L-CVRP), the objective of 3L-FCVRP is to minimize total fuel consumption rather than travel distance, where the fuel consumption rate is assumed to be proportional to the total weight of the vehicle. A route is feasible only if there exists a loading plan that packs the demanded items into the vehicle while satisfying a set of practical constraints. To solve the problem, an evolutionary local search (ELS) framework incorporating a recombination method is employed to explore the solution space, and an open-space-based heuristic examines the feasibility of solutions. To verify the effectiveness of our approach, we first test ELS on instances of 3L-CVRP, which can be seen as a special case of 3L-FCVRP; the results demonstrate that ELS outperforms all existing approaches on average and improves the best known solutions for most instances. We then generate instances for 3L-FCVRP and report detailed ELS results for future comparison.
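The weight-proportional fuel model can be sketched directly: each leg costs rate × distance × (curb weight + load still on board), so routes that unload heavy demands early burn less fuel. The unit rate, curb weight, and toy distances below are illustrative, not figures from the paper.

```python
def route_fuel(route, demand_weight, dist, curb_weight=1.0, rate=1.0):
    """Fuel for one route under the weight-proportional model. The route
    starts and ends at depot 0; the load shrinks as customers are served."""
    load = sum(demand_weight[c] for c in route)
    fuel, prev = 0.0, 0
    for c in route:
        fuel += rate * dist[prev][c] * (curb_weight + load)
        load -= demand_weight[c]   # this customer's items leave the vehicle
        prev = c
    fuel += rate * dist[prev][0] * curb_weight   # return empty to the depot
    return fuel

# Two customers forming a triangle with the depot.
dist = {0: {0: 0, 1: 10, 2: 10}, 1: {0: 10, 1: 0, 2: 5}, 2: {0: 10, 1: 5, 2: 0}}
demand = {1: 2.0, 2: 1.0}
f_heavy_first = route_fuel([1, 2], demand, dist)   # serve heavy customer first
f_light_first = route_fuel([2, 1], demand, dist)   # same distance, more fuel
```

Both orders travel 25 distance units, yet they differ in fuel, which is exactly why 3L-FCVRP's optimum can diverge from the distance-minimizing 3L-CVRP solution.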
{"title":"An evolutionary local search for the capacitated vehicle routing problem minimizing fuel consumption under three-dimensional loading constraints","authors":"Lijun Wei, Zhenzhen Zhang, A. Lim","doi":"10.1109/ICNC.2014.6975835","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975835","url":null,"abstract":"This paper introduces and solves a new practical variant of integrated routing and loading problem called the capacitated vehicle routing problem minimizing fuel consumption under three-dimensional loading constraints (3L-FCVRP). This problem requires to design routes for a fleet of homogeneous vehicles located at the central depot to serve all customers, whose demand are formed by a set of three-dimensional, rectangular, weighted items. Different from the well-studied problem: capacitated vehicle routing problem with 3D loading constraints (3L-CVRP) in literature, the objective of 3L-FCVRP is to minimize the total fuel consumption instead of travel distance. The fuel consumption rate is assumed to be proportionate to the total weight of the vehicle. A route is feasible only if a feasible loading plan to load the demanded items into the vehicle exists and the loading plan must satisfy a set of practical constraints. To solve this problem, the evolutionary local search (ELS) framework incorporating with recombination method is employed to explore the solution space and an open space based heuristic is used to examine the feasibility of solutions. To verify the effectiveness of our approach, we first test ELS on the instances of 3L-CVRP, which can be seen as a special case of 3L-FCVRP. The results demonstrate that ELS outperforms all existing approaches on average and improves the best known solutions for most of the instances. Then, we generated data for 3L-FCVRP and reported the detailed results of ELS for future comparisons.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116392162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975882
Carine Pierrette Mukamakuza, Jia-yang Wang, Li Li
This paper presents an analysis of reduction and dynamic reducts for information data. The method of reduction in an information system is explained first, with the information assumed to be in two-dimensional (matrix) form. A discernibility matrix of the data is constructed, and all reducts are derived from it. The best (optimum) reduct is then selected from all reducts by choosing the one with the highest frequency, implemented in Java with the Weka tool. Three methods of dynamic reduct computation are introduced: the dynamic reduct defined in the object-oriented rough set model, dynamic reduct calculation based on reduct traces, and F-dynamic reduct generation using cascading hashes. Analysis of these three methods leads to their improvement: each algorithm gains a step that selects the optimum reducts from all reducts computed in its first stages, so that dynamic reducts are generated from the optimum reducts rather than from all reducts. Generating these improved dynamic reducts thus improves all three methods of dynamic reduct calculation.
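The discernibility-matrix construction can be sketched on a toy decision table; the highest-frequency attribute selection below is our reading of the abstract's frequency-based "optimum reduct" criterion, and the tiny table is invented for illustration.

```python
from collections import Counter
from itertools import combinations

def discernibility_entries(table, decision):
    """For each pair of objects with different decisions, collect the set of
    condition attributes on which they differ (the discernibility matrix,
    flattened into a list of entries)."""
    entries = []
    for (x, dx), (y, dy) in combinations(zip(table, decision), 2):
        if dx != dy:
            entries.append(frozenset(a for a in x if x[a] != y[a]))
    return entries

def most_frequent_attribute(entries):
    """Rank attributes by how many matrix entries they appear in and return
    the most frequent one, the frequency criterion used to pick the
    'optimum' reduct."""
    freq = Counter(a for e in entries for a in e)
    return freq.most_common(1)[0][0]

# Toy decision table: two condition attributes a, b and a binary decision.
table = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
decision = [0, 1, 1]
entries = discernibility_entries(table, decision)
best = most_frequent_attribute(entries)
```

Here attribute `b` alone discerns every decision-relevant pair, so it is both a reduct and the most frequent attribute across the matrix entries.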
{"title":"Dynamic reducts computation analysis based on rough sets","authors":"Carine Pierrette Mukamakuza, Jia-yang Wang, Li Li","doi":"10.1109/ICNC.2014.6975882","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975882","url":null,"abstract":"In this paper analysis of reduction and dynamic reducts of an information data is presented. The method of reduction in information system is explained first, the information was assumed to be in a two-dimension or in a matrix form. A discernibility matrix of the data was constructed, and then all reducts from that matrix were found. The best (optimum) reduct was selected from all reducts; that was achieved by considering the one with the highest level of frequency by using Java programming and Weka tool. Three methods of dynamic reducts computation are introduced namely: The new type of Reduct in the object-oriented rough set model which is called dynamic reduct, the method of dynamic reduct calculation based on calculating of reduct traces and the generation F-dynamic reduct using cascading Hashes. The analysis of those three methods led to their improvement through adding one step in each algorithm which was the method of getting the optimum reducts from all reducts calculated in first steps of each algorithm. As result, the dynamic reducts were generated from optimum reducts and not from all reducts. Thus by generating an improved dynamic reducts, improvement of those three methods for calculation of dynamic reducts is achieved.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127311341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975902
Peili Yang, Zhongqi Yang, Shien Ge
Cache-based side-channel attacks have been extensively studied in recent years because of the high damage they can cause. We are interested in replaying one instance of such an attack to demonstrate its feasibility and explore its details. Based on a literature review, we implemented a cache-based covert channel on an x86 machine and evaluated its performance through statistical analysis. Under certain limitations, we found that the channel can achieve a bandwidth as high as 1 MB/s with over 99% accuracy, which is sufficient to carry a large amount of information.
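As a back-of-the-envelope check on the reported figures: if the channel is modeled as a binary symmetric channel with a 1% bit-error rate, Shannon capacity tells how much of the raw 1 MB/s survives error correction. This framing is ours, not an analysis from the paper.

```python
from math import log2

def bsc_capacity(error_rate):
    """Shannon capacity (bits per raw bit) of a binary symmetric channel:
    C = 1 - H(p), where H is the binary entropy of the error rate."""
    if error_rate in (0.0, 1.0):
        return 1.0
    h = -error_rate * log2(error_rate) - (1 - error_rate) * log2(1 - error_rate)
    return 1.0 - h

# At >= 99% accuracy, about 92% of the raw bandwidth is recoverable,
# so the 1 MB/s covert channel still carries on the order of 0.9 MB/s.
effective = bsc_capacity(0.01)
```

A completely unreliable channel (50% errors) carries nothing, which the capacity formula also reflects.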
{"title":"Establishing covert channel on shared cache architecture","authors":"Peili Yang, Zhongqi Yang, Shien Ge","doi":"10.1109/ICNC.2014.6975902","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975902","url":null,"abstract":"Cache-based side channel attack has been extensively studied in recent years due to the possibly high damage it would cause. We are interested in replaying one instance of such attack to prove its feasibility and explore its details. Based on the literature review, we implemented a cache-based covert channel on a x86 machine and evaluated its performance by statistical analysis. Under certain limitations, we found that the channel can achieve a bandwidth as high as 1MB/s with over 99% accuracy, which is fairly enough to carry large amount of information.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130497609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975862
Erick Castellanos, F. Ramos, M. Ramos
Plant simulation with Lindenmayer systems is a well-known field, but most work in the area focuses on the growth stage of the developmental process. From an artificial-life perspective, a simulation should include all stages of a plant's life cycle. This paper therefore targets the final stage and proposes a strategy for modeling the concept of death with Lindenmayer systems. By using parametric, context-sensitive Lindenmayer systems in the modeling and simulation, the semantics of death can be captured and, with a proper interpretation, displayed graphically at a morphological level. A proof of concept covering most of these ideas is also given.
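The death stage can be captured with an aging parameter in a parametric L-system: apices A(t) grow while young and are rewritten into a dead marker once their age exceeds a limit. The rule set below is our toy illustration of the idea, not the paper's actual grammar.

```python
def step(word, max_age=3):
    """One derivation step of a toy parametric L-system. Each module is a
    (symbol, age) pair: an apex A produces a segment F plus an older apex,
    and dies into marker D when its age reaches max_age."""
    out = []
    for symbol, age in word:
        if symbol == "A":
            if age >= max_age:
                out.append(("D", age))          # the module dies
            else:
                out.append(("F", 0))            # leaves a branch segment...
                out.append(("A", age + 1))      # ...and an aged apex
        else:
            out.append((symbol, age))           # F and D persist unchanged
    return out

word = [("A", 0)]
for _ in range(5):
    word = step(word)
```

After death the string is a fixed point of the rewriting, so the dead structure persists in the rendered morphology instead of growing forever.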
{"title":"Semantic death in plant's simulation using Lindenmayer systems","authors":"Erick Castellanos, F. Ramos, M. Ramos","doi":"10.1109/ICNC.2014.6975862","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975862","url":null,"abstract":"Plant's simulation through Lindenmayer Systems is a well know field, but most of the work in the area focus on the growth part of the developmental process. From an artificial life perspective, it is desired to have a simulation that includes all the stages of the cycle of life of a plant. That is the reason why this paper target the last stage and propose a strategy to include the concept of death through Lindenmayer systems. By using parametric and context-sensitive Lindenmayer systems in the modeling and simulation, the semantics of the mentioned concept can be captured and, thereby, with the proper interpretation, a graphic result, at a morphological level, can be displayed. A proof of concept that includes most of the concepts covered is also given.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"13 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131419263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-08 DOI: 10.1109/ICNC.2014.6975910
Xiaoxiao Wang, L. Jiao
This paper proposes a modified species conservation technique for reversible logic circuit synthesis, a problem characterized by a multimodal and large search space. The species conservation technique is tailored to the uncertainty caused by the variable-length representation: species are separated according to a new similarity definition, and the similarity threshold is adjusted dynamically as chromosome length increases to ensure exploration of the search space. When a species converges, species elimination and a restarted search avoid redundant effort. All species are given the same reproduction probability, rather than one proportional to their ranking. Experiments on a series of benchmark test functions compare the method with the basic evolutionary algorithm without species conservation and with the original species conservation method, demonstrating its superior performance.
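Species seeding over variable-length chromosomes can be sketched as follows. The matches-over-longer-length similarity is one plausible reading of the abstract's "new similarity definition", and the gate-sequence chromosomes are invented for illustration; the paper's exact formula and threshold schedule are not given in the abstract.

```python
def similarity(a, b):
    """Similarity of two variable-length chromosomes: positionwise matches
    divided by the longer length, so padding a chromosome with extra genes
    lowers its similarity to shorter ones."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def species_seeds(population, threshold):
    """Conserve one seed per species: an individual founds a new species
    only if it is below the similarity threshold w.r.t. every existing
    seed (the dynamic threshold adjustment is omitted in this sketch)."""
    seeds = []
    for ind in population:
        if all(similarity(ind, s) < threshold for s in seeds):
            seeds.append(ind)
    return seeds

# Toy reversible-circuit chromosomes as gate sequences of varying length.
pop = [p.split("-") for p in ["NOT-CNOT", "NOT-CNOT-T", "T-T-H", "H-T-T"]]
seeds = species_seeds(pop, threshold=0.6)
```

Here "NOT-CNOT-T" falls into the species of its shorter prefix, while the two dissimilar T/H sequences each found their own species, giving three conserved seeds.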
{"title":"Synthesis of reversible logic circuit using a species conservation method","authors":"Xiaoxiao Wang, L. Jiao","doi":"10.1109/ICNC.2014.6975910","DOIUrl":"https://doi.org/10.1109/ICNC.2014.6975910","url":null,"abstract":"This paper aims to propose a modified species conservation technique for reversible logic circuits synthesis which is characterized by multimodal and large search space. The species conservation technique is tailored to adapt the uncertainty caused by the variable length representation. The different species is divided according to a new similarity definition and the similarity threshold is dynamically adjusted with the increasing of the chromosome length to ensure the search space exploring. A species elimination and restart search are conducted to avoid redundant search when a species converged. The same reproduction probability, other than that proportionate to its ranking, is given to different species. Experiments have been performed on a series of benchmark test functions. Comparison is primarily conducted to show the superior performance different to the basic evolutionary algorithm without species conservation mechanism and the original species conservation method.","PeriodicalId":208779,"journal":{"name":"2014 10th International Conference on Natural Computation (ICNC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132537787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}