{"title":"HRA: A Multi-Criteria Framework for Ranking Metaheuristic Optimization Algorithms","authors":"Evgenia-Maria K. Goula, Dimitris G. Sotiropoulos","doi":"arxiv-2409.11617","DOIUrl":null,"url":null,"abstract":"Metaheuristic algorithms are essential for solving complex optimization\nproblems in different fields. However, the difficulty in comparing and rating\nthese algorithms remains due to the wide range of performance metrics and\nproblem dimensions usually involved. On the other hand, nonparametric\nstatistical methods and post hoc tests are time-consuming, especially when we\nonly need to identify the top performers among many algorithms. The\nHierarchical Rank Aggregation (HRA) algorithm aims to efficiently rank\nmetaheuristic algorithms based on their performance across many criteria and\ndimensions. The HRA employs a hierarchical framework that begins with\ncollecting performance metrics on various benchmark functions and dimensions.\nRank-based normalization is employed for each performance measure to ensure\ncomparability and the robust TOPSIS aggregation is applied to combine these\nrankings at several hierarchical levels, resulting in a comprehensive ranking\nof the algorithms. Our study uses data from the CEC 2017 competition to\ndemonstrate the robustness and efficacy of the HRA framework. It examines 30\nbenchmark functions and evaluates the performance of 13 metaheuristic\nalgorithms across five performance indicators in four distinct dimensions. This\npresentation highlights the potential of the HRA to enhance the interpretation\nof the comparative advantages and disadvantages of various algorithms by\nsimplifying practitioners' choices of the most appropriate algorithm for\ncertain optimization problems.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"31 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11617","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Metaheuristic algorithms are essential for solving complex optimization problems in many fields. However, comparing and rating these algorithms remains difficult because of the wide range of performance metrics and problem dimensions usually involved. Moreover, nonparametric statistical methods and post hoc tests are time-consuming, especially when we only need to identify the top performers among many algorithms. The Hierarchical Rank Aggregation (HRA) algorithm aims to rank metaheuristic algorithms efficiently based on their performance across many criteria and dimensions. HRA employs a hierarchical framework that begins with collecting performance metrics on various benchmark functions and dimensions. Rank-based normalization is applied to each performance measure to ensure comparability, and robust TOPSIS aggregation combines these rankings at several hierarchical levels, resulting in a comprehensive ranking of the algorithms. Our study uses data from the CEC 2017 competition to demonstrate the robustness and efficacy of the HRA framework: it examines 30 benchmark functions and evaluates 13 metaheuristic algorithms across five performance indicators in four distinct dimensions. These results highlight the potential of HRA to improve the interpretation of the comparative advantages and disadvantages of different algorithms, simplifying practitioners' choice of the most appropriate algorithm for a given optimization problem.
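
To make the rank-then-aggregate idea concrete, the sketch below illustrates one level of the pipeline described in the abstract: raw performance values are first replaced by per-criterion ranks, and the rank matrix is then aggregated with a classic TOPSIS closeness score. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses plain TOPSIS in place of the paper's robust TOPSIS variant, assumes lower raw metric values are better, ignores rank ties, and all function names (rank_normalize, topsis) and the toy data are hypothetical.

```python
import numpy as np


def rank_normalize(scores):
    """Replace raw metric values with per-criterion ranks (1 = best).

    scores: array of shape (n_algorithms, n_criteria); lower raw values
    are assumed better. Ties are not averaged in this simple sketch.
    """
    order = np.argsort(scores, axis=0)          # best algorithm first in each column
    ranks = np.empty_like(order)
    rows = np.arange(scores.shape[0])
    for j in range(scores.shape[1]):
        ranks[order[:, j], j] = rows + 1        # assign 1-based ranks per criterion
    return ranks.astype(float)


def topsis(matrix, weights=None):
    """Classic TOPSIS closeness coefficients; higher means closer to the ideal.

    Because the matrix holds ranks (lower = better), every criterion is
    treated as a cost criterion.
    """
    m, n = matrix.shape
    w = np.full(n, 1.0 / n) if weights is None else weights / weights.sum()
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization per column
    v = norm * w                                     # weighted normalized matrix
    ideal_best = v.min(axis=0)                       # cost criteria: smaller is better
    ideal_worst = v.max(axis=0)
    d_best = np.linalg.norm(v - ideal_best, axis=1)  # distance to ideal solution
    d_worst = np.linalg.norm(v - ideal_worst, axis=1)
    return d_worst / (d_best + d_worst)


# Toy example: 4 algorithms evaluated on 3 performance indicators (lower is better).
raw = np.array([[1e-3, 2e-2, 5.0],
                [5e-4, 3e-2, 4.0],
                [2e-3, 1e-2, 6.0],
                [1e-4, 5e-2, 3.0]])
closeness = topsis(rank_normalize(raw))
print(np.argsort(-closeness) + 1)  # algorithm indices, best first
```

In the full HRA framework this aggregation step would be repeated hierarchically, e.g., first across benchmark functions within a dimension and then across dimensions, so that a single overall ranking of the algorithms emerges.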