{"title":"基于信息素学习策略的全局连续优化自适应差分进化算法","authors":"Pirapong Singsathid, P. Puphasuk, J. Wetweerapong","doi":"10.2478/fcds-2023-0010","DOIUrl":null,"url":null,"abstract":"Abstract Differential evolution algorithm (DE) is a well-known population-based method for solving continuous optimization problems. It has a simple structure and is easy to adapt to a wide range of applications. However, with suitable population sizes, its performance depends on the two main control parameters: scaling factor (F ) and crossover rate (CR). The classical DE method can achieve high performance by a time-consuming tunning process or a sophisticated adaptive control implementation. We propose in this paper an adaptive differential evolution algorithm with a pheromone-based learning strategy (ADE-PS) inspired by ant colony optimization (ACO). The ADE-PS embeds a pheromone-based mechanism that manages the probabilities associated with the partition values of F and CR. It also introduces a resetting strategy to reset the pheromone at a specific time to unlearn and relearn the progressing search. The preliminary experiments find a suitable number of subintervals (ns) for partitioning the control parameter ranges and the reset period (rs) for resetting the pheromone. Then the comparison experiments evaluate ADE-PS using the suitable ns and rs against some adaptive DE methods in the literature. The results show that ADE-PS is more reliable and outperforms several well-known methods in the literature.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"48 1","pages":"243 - 266"},"PeriodicalIF":1.8000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive differential evolution algorithm with a pheromone-based learning strategy for global continuous optimization\",\"authors\":\"Pirapong Singsathid, P. Puphasuk, J. Wetweerapong\",\"doi\":\"10.2478/fcds-2023-0010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Differential evolution algorithm (DE) is a well-known population-based method for solving continuous optimization problems. It has a simple structure and is easy to adapt to a wide range of applications. However, with suitable population sizes, its performance depends on the two main control parameters: scaling factor (F ) and crossover rate (CR). The classical DE method can achieve high performance by a time-consuming tunning process or a sophisticated adaptive control implementation. We propose in this paper an adaptive differential evolution algorithm with a pheromone-based learning strategy (ADE-PS) inspired by ant colony optimization (ACO). The ADE-PS embeds a pheromone-based mechanism that manages the probabilities associated with the partition values of F and CR. It also introduces a resetting strategy to reset the pheromone at a specific time to unlearn and relearn the progressing search. The preliminary experiments find a suitable number of subintervals (ns) for partitioning the control parameter ranges and the reset period (rs) for resetting the pheromone. Then the comparison experiments evaluate ADE-PS using the suitable ns and rs against some adaptive DE methods in the literature. 
The results show that ADE-PS is more reliable and outperforms several well-known methods in the literature.\",\"PeriodicalId\":42909,\"journal\":{\"name\":\"Foundations of Computing and Decision Sciences\",\"volume\":\"48 1\",\"pages\":\"243 - 266\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Foundations of Computing and Decision Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2478/fcds-2023-0010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Foundations of Computing and Decision Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2478/fcds-2023-0010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract. The differential evolution algorithm (DE) is a well-known population-based method for solving continuous optimization problems. It has a simple structure and is easy to adapt to a wide range of applications. However, even with a suitable population size, its performance depends heavily on two main control parameters: the scaling factor (F) and the crossover rate (CR). The classical DE method can achieve high performance only through a time-consuming tuning process or a sophisticated adaptive control implementation. In this paper, we propose an adaptive differential evolution algorithm with a pheromone-based learning strategy (ADE-PS), inspired by ant colony optimization (ACO). ADE-PS embeds a pheromone-based mechanism that manages the selection probabilities associated with the partitioned subintervals of the F and CR ranges. It also introduces a resetting strategy that clears the pheromone at specified times so the search can unlearn and relearn as it progresses. Preliminary experiments determine a suitable number of subintervals (ns) for partitioning the control parameter ranges and a suitable reset period (rs) for resetting the pheromone. Comparison experiments then evaluate ADE-PS, using these ns and rs values, against several adaptive DE methods from the literature. The results show that ADE-PS is more reliable than, and outperforms, several well-known methods in the literature.
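The abstract only sketches the mechanism, so the following is a minimal Python illustration rather than the authors' implementation. It assumes that F and CR are each partitioned over [0, 1] into ns subintervals, that each subinterval carries a pheromone value used for roulette-wheel selection, that a successful trial vector reinforces the subintervals that produced its F and CR, and that all pheromone values are reset every rs generations. The names (ade_ps_sketch, PheromoneSelector), the reward increment, and the use of the DE/rand/1/bin variant are hypothetical choices for illustration.

```python
import numpy as np

# Illustrative sketch only: the exact pheromone update, selection, and
# reset rules of ADE-PS may differ from what is shown here.

class PheromoneSelector:
    """Maintains pheromone weights over ns subintervals of a parameter range."""

    def __init__(self, low, high, ns, rng):
        self.edges = np.linspace(low, high, ns + 1)
        self.pheromone = np.ones(ns)   # start with uniform pheromone
        self.rng = rng

    def sample(self):
        """Pick a subinterval proportionally to pheromone, then a value inside it."""
        probs = self.pheromone / self.pheromone.sum()
        k = self.rng.choice(len(probs), p=probs)
        return self.rng.uniform(self.edges[k], self.edges[k + 1]), k

    def reward(self, k, amount=0.1):
        """Reinforce the subinterval that produced a successful trial vector."""
        self.pheromone[k] += amount

    def reset(self):
        """Forget accumulated pheromone (the 'unlearn and relearn' step)."""
        self.pheromone[:] = 1.0


def ade_ps_sketch(objective, dim, bounds, pop_size=50, ns=5, rs=100,
                  max_gens=1000, seed=0):
    """Toy DE/rand/1/bin loop with pheromone-guided F and CR (hypothetical)."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, (pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    f_sel = PheromoneSelector(0.0, 1.0, ns, rng)    # assumed F range
    cr_sel = PheromoneSelector(0.0, 1.0, ns, rng)   # assumed CR range

    for gen in range(1, max_gens + 1):
        if gen % rs == 0:                           # reset period rs
            f_sel.reset()
            cr_sel.reset()
        for i in range(pop_size):
            F, kf = f_sel.sample()
            CR, kc = cr_sel.sample()
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), low, high)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True         # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fit[i]:                   # success: reinforce chosen subintervals
                pop[i], fit[i] = trial, f_trial
                f_sel.reward(kf)
                cr_sel.reward(kc)
    return pop[np.argmin(fit)], fit.min()
```

For example, `ade_ps_sketch(lambda x: np.sum(x**2), dim=10, bounds=(-5.0, 5.0))` runs the sketch on a sphere function; the pheromone vectors gradually concentrate probability on the F and CR subintervals that keep producing successful trials, which is the learning effect the paper's strategy is built around.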