{"title":"Multi-strategy Improved Multi-objective Harris Hawk Optimization Algorithm with Elite Opposition-based Learning","authors":"Fulin Tian, Jiayang Wang, Fei Chu, Lin Zhou","doi":"10.1145/3590003.3590030","DOIUrl":null,"url":null,"abstract":"Abstract: To make up for the deficiencies of the Harris hawk optimization algorithm (HHO) in solving multi-objective optimization problems with low algorithm accuracy, slow rate of convergence, and easily fall into the trap of local optima, a multi-strategy improved multi-objective Harris hawk optimization algorithm with elite opposition-based learning (MO-EMHHO) is proposed. First, the population is initialized by Sobol sequences to increase population diversity. Second, incorporate the elite backward learning strategy to improve population diversity and quality. Further, an external profile maintenance method based on an adaptive grid strategy is proposed to make the solution better contracted to the real Pareto frontier. Subsequently, optimize the update strategy of the original algorithm in a non-linear energy update way to improve the exploration and development of the algorithm. Finally, improving the diversity of the algorithm and the uniformity of the solution set using an adaptive variation strategy based on Gaussian random wandering. 
Experimental comparison of the multi-objective particle swarm algorithm (MOPSO), multi-objective gray wolf algorithm (MOGWO), and multi-objective Harris Hawk algorithm (MOHHO) on the commonly used benchmark functions shows that the MO-EMHHO outperforms the other compared algorithms in terms of optimization seeking accuracy, convergence speed and stability, and provides a new solution to the multi-objective optimization problem.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3590003.3590030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
To address the deficiencies of the Harris hawk optimization algorithm (HHO) in solving multi-objective optimization problems, namely low accuracy, slow convergence, and a tendency to become trapped in local optima, a multi-strategy improved multi-objective Harris hawk optimization algorithm with elite opposition-based learning (MO-EMHHO) is proposed. First, the population is initialized with Sobol sequences to increase population diversity. Second, an elite opposition-based learning strategy is incorporated to further improve population diversity and quality. Third, an external archive maintenance method based on an adaptive grid strategy is proposed so that the solution set converges more closely to the true Pareto front. Fourth, the update strategy of the original algorithm is modified with a non-linear energy update to better balance the algorithm's exploration and exploitation. Finally, an adaptive mutation strategy based on Gaussian random walks improves the diversity of the algorithm and the uniformity of the solution set. Experimental comparisons with the multi-objective particle swarm optimization algorithm (MOPSO), the multi-objective grey wolf optimizer (MOGWO), and the multi-objective Harris hawk optimization algorithm (MOHHO) on commonly used benchmark functions show that MO-EMHHO outperforms the compared algorithms in solution accuracy, convergence speed, and stability, providing a new approach to multi-objective optimization problems.
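The initialization pipeline described in the abstract (Sobol-sequence sampling followed by elite opposition-based learning) can be sketched as below. This is a minimal illustration under assumed details, not the authors' implementation: `elite_frac`, the random scaling factor `k`, and the out-of-bounds repair rule are assumptions, and the opposition formula x̄ = k(dₐ + d_b) − x over the elites' per-dimension bounds [dₐ, d_b] is the commonly published EOBL form. The nonlinear escape-energy schedule is likewise one common quadratic variant; the abstract does not give the paper's exact expression.

```python
import numpy as np
from scipy.stats import qmc  # Sobol low-discrepancy sampler


def init_population(n, dim, lb, ub, seed=0):
    """Sobol-sequence initialization: spreads points over the search box
    more evenly than i.i.d. uniform draws, increasing initial diversity."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n)            # points in [0, 1)^dim
    return qmc.scale(unit, lb, ub)      # map to [lb, ub]


def elite_opposition(pop, fitness, lb, ub, elite_frac=0.2, rng=None):
    """Elite opposition-based learning (assumed standard form): reflect each
    solution through the dynamic bounds spanned by the current elite subset.
    The caller would then keep the fitter of each (original, opposite) pair."""
    rng = np.random.default_rng() if rng is None else rng
    n_elite = max(1, int(elite_frac * len(pop)))
    elites = pop[np.argsort(fitness)[:n_elite]]
    da, db = elites.min(axis=0), elites.max(axis=0)  # per-dimension elite bounds
    k = rng.random()                                 # random scaling factor
    opp = k * (da + db) - pop
    # Repair opposites that leave the search bounds (assumed repair rule)
    out = (opp < lb) | (opp > ub)
    return np.where(out, rng.uniform(lb, ub, size=pop.shape), opp)


def escape_energy(t, T, rng):
    """One common nonlinear variant of HHO's escape energy: quadratic decay
    instead of the original linear 2*E0*(1 - t/T). Assumed form only."""
    E0 = 2.0 * rng.random() - 1.0       # E0 in (-1, 1), redrawn each iteration
    return 2.0 * E0 * (1.0 - (t / T) ** 2)
```

In a full MO-EMHHO loop, the opposite population would be evaluated alongside the original one and the non-dominated survivors pushed into the adaptive-grid external archive.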