Expensive many-objective evolutionary optimization guided by two individual infill criteria

Shufen Qin, Chaoli Sun, Farooq Akhtar, Gang Xie

Memetic Computing, published 2023-12-19. DOI: 10.1007/s12293-023-00404-0

Abstract

Recently, surrogate-assisted multi-objective evolutionary algorithms have attracted considerable attention for solving computationally expensive multi- and many-objective optimization problems. An effective infill sampling strategy is critical in surrogate-assisted multi-objective evolutionary optimization, as it helps the evolutionary algorithm identify the optimal non-dominated solutions. This paper proposes a Kriging-assisted many-objective optimization algorithm guided by two infill sampling criteria, which self-adaptively select two new solutions for expensive objective function evaluations that are then used to update the surrogate models. The first, uncertainty-based criterion selects the solution with the maximum approximation uncertainty for expensive function evaluation, improving the chance of discovering the optimal region; the approximation uncertainty of a solution is the weighted sum of the approximation uncertainties over all objectives. The second, indicator-based criterion selects the solution with the best indicator value to accelerate exploitation of the non-dominated optimal solutions; the indicator of an individual is defined by convergence-based and crowding-based distances in the objective space. Finally, two multi-objective test suites, DTLZ and MaF, and three real-world applications are used to compare the proposed method against four classical surrogate-assisted multi-objective evolutionary algorithms. The results show that the proposed algorithm is more competitive on most of the optimization problems.
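The two infill criteria described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: the weight vector, the use of the ideal point for the convergence distance, and the nearest-neighbour crowding distance are illustrative assumptions; the paper defines its indicator from convergence-based and crowding-based distances, for which many concrete choices exist.

```python
import numpy as np

def uncertainty_criterion(sigmas, weights=None):
    """Exploration: pick the candidate whose weighted sum of
    per-objective Kriging approximation uncertainties is largest.

    sigmas: (n_candidates, n_objectives) predicted standard deviations.
    weights: optional per-objective weights (uniform if omitted).
    """
    sigmas = np.asarray(sigmas, dtype=float)
    if weights is None:
        weights = np.full(sigmas.shape[1], 1.0 / sigmas.shape[1])
    score = sigmas @ np.asarray(weights, dtype=float)
    return int(np.argmax(score))

def indicator_criterion(objs):
    """Exploitation: pick the candidate with the best combined
    convergence/crowding indicator (assumed forms, minimization).

    Convergence here is the Euclidean distance to the ideal point;
    crowding is the distance to the nearest other candidate.
    """
    objs = np.asarray(objs, dtype=float)       # (n_candidates, n_objectives)
    ideal = objs.min(axis=0)                   # component-wise best values
    convergence = np.linalg.norm(objs - ideal, axis=1)
    # Pairwise distances; ignore self-distance on the diagonal.
    d = np.linalg.norm(objs[:, None, :] - objs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    crowding = d.min(axis=1)
    # Lower convergence and higher crowding are both better.
    return int(np.argmin(convergence - crowding))
```

In each generation the algorithm would evaluate the candidate chosen by each criterion on the true expensive objectives and retrain the Kriging models, so that exploration (uncertainty) and exploitation (indicator) each contribute one new sample.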
Memetic Computing (Computer Science, Artificial Intelligence; Operations Research & Management Science)
CiteScore: 6.80
Self-citation rate: 12.80%
Articles published per year: 31
Impact Factor: 3.3
Journal introduction:
Memes have been defined as basic units of transferrable information that reside in the brain and are propagated across populations through the process of imitation. From an algorithmic point of view, memes have come to be regarded as building-blocks of prior knowledge, expressed in arbitrary computational representations (e.g., local search heuristics, fuzzy rules, neural models, etc.), that have been acquired through experience by a human or machine, and can be imitated (i.e., reused) across problems.
The Memetic Computing journal welcomes papers incorporating the aforementioned socio-cultural notion of memes into artificial systems, with particular emphasis on enhancing the efficacy of computational and artificial intelligence techniques for search, optimization, and machine learning through explicit incorporation of prior knowledge. The goal of the journal is thus to be an outlet for high-quality theoretical and applied research on hybrid, knowledge-driven computational approaches that may be characterized under any of the following categories of memetics:
Type 1: General-purpose algorithms integrated with human-crafted heuristics that capture some form of prior domain knowledge; e.g., traditional memetic algorithms hybridizing evolutionary global search with a problem-specific local search.
Type 2: Algorithms with the ability to automatically select, adapt, and reuse the most appropriate heuristics from a diverse pool of available choices; e.g., learning a mapping between global search operators and multiple local search schemes, given an optimization problem at hand.
Type 3: Algorithms that autonomously learn with experience, adaptively reusing data and/or machine learning models drawn from related problems as prior knowledge in new target tasks of interest; examples include, but are not limited to, transfer learning and optimization, multi-task learning and optimization, or any other multi-X evolutionary learning and optimization methodologies.