IMODEII: an Improved IMODE algorithm based on the Reinforcement Learning
Karam M. Sallam, Mohamed Abdel-Basset, Mohammed El-Abd, A. W. Mohamed
2022 IEEE Congress on Evolutionary Computation (CEC). DOI: 10.1109/CEC55065.2022.9870420. Published 2022-07-18.
The success of the differential evolution (DE) algorithm depends on its offspring breeding strategy and the associated control parameters. Improved Multi-Operator Differential Evolution (IMODE) proved its efficiency by ranking first in the CEC2020 competition. In this paper, an improved IMODE, called IMODEII, is introduced. In IMODEII, Reinforcement Learning (RL), a computational methodology that simulates interaction-based learning, is used as an adaptive operator selection approach. During the optimization process, RL selects the best-performing of three candidate actions, based on the population state and a reward value, to evolve a set of solutions. Unlike IMODE, only two mutation strategies are used in IMODEII. We tested the performance of the proposed IMODEII on 12 benchmark functions with 10 and 20 variables taken from the CEC2022 competition on single-objective bound-constrained numerical optimisation. A comparison between the proposed IMODEII and state-of-the-art algorithms is conducted, with the results demonstrating the efficiency of the proposed IMODEII.
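
The abstract describes the core mechanism only at a high level: RL picks among candidate actions, using the population state and a reward signal, to decide how each offspring is generated. As a rough illustration of that idea (not the authors' actual IMODEII implementation), the Python sketch below pairs a plain DE loop with an epsilon-greedy value estimate over two common mutation strategies (DE/rand/1 and DE/current-to-pbest/1). The sphere objective, the state-free value estimate, the improvement-based reward, and all parameter settings are assumptions made for illustration; the paper itself uses three actions, population-state information, and the CEC2022 benchmark suite.

# Illustrative sketch only: a basic DE loop with an epsilon-greedy value estimate
# used to pick between two mutation strategies. The reward definition, parameter
# values, and test function below are assumptions, not the IMODEII design.
import numpy as np

def sphere(x):
    # simple stand-in objective (not a CEC2022 benchmark)
    return float(np.sum(x ** 2))

def de_rand_1(pop, i, F):
    idx = np.delete(np.arange(len(pop)), i)
    r1, r2, r3 = np.random.choice(idx, 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_current_to_pbest_1(pop, fitness, i, F, p=0.1):
    pbest_pool = np.argsort(fitness)[: max(1, int(p * len(pop)))]
    pbest = pop[np.random.choice(pbest_pool)]
    idx = np.delete(np.arange(len(pop)), i)
    r1, r2 = np.random.choice(idx, 2, replace=False)
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])

def rl_de(fobj, dim=10, pop_size=50, max_gen=200, F=0.5, CR=0.9,
          eps=0.1, alpha=0.1, seed=0):
    np.random.seed(seed)
    pop = np.random.uniform(-100.0, 100.0, (pop_size, dim))
    fitness = np.array([fobj(x) for x in pop])
    q = np.zeros(2)  # running value estimate for each mutation strategy ("action")

    for _ in range(max_gen):
        for i in range(pop_size):
            # epsilon-greedy action selection (stand-in for the paper's RL policy)
            a = np.random.randint(2) if np.random.rand() < eps else int(np.argmax(q))
            mutant = (de_rand_1(pop, i, F) if a == 0
                      else de_current_to_pbest_1(pop, fitness, i, F))
            mutant = np.clip(mutant, -100.0, 100.0)   # respect the bound constraints
            mask = np.random.rand(dim) < CR           # binomial crossover
            mask[np.random.randint(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            f_trial = fobj(trial)
            # reward = normalised fitness improvement (an assumption)
            reward = max(0.0, (fitness[i] - f_trial) / (abs(fitness[i]) + 1e-12))
            q[a] += alpha * (reward - q[a])           # incremental value update
            if f_trial <= fitness[i]:                 # greedy DE selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], float(fitness[best])

best_x, best_f = rl_de(sphere)
print("best fitness found:", best_f)

Replacing the state-free epsilon-greedy estimate with a policy conditioned on population statistics (for example diversity or stagnation measures) would move this sketch closer to the state- and reward-driven adaptive operator selection described in the abstract.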