Songbai Liu;Jun Li;Qiuzhen Lin;Ye Tian;Jianqiang Li;Kay Chen Tan
DOI: 10.1109/TETCI.2024.3369629
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence (Q1, Computer Science, Artificial Intelligence)
Publication date: 2024-03-07
URL: https://ieeexplore.ieee.org/document/10462591/
Evolutionary Large-Scale Multiobjective Optimization via Autoencoder-Based Problem Transformation
Efficiently handling high-dimensional search spaces when solving large-scale multiobjective optimization problems (LMOPs) has become an emerging research topic in evolutionary computation. In response, this paper proposes a new evolutionary optimizer built on autoencoder-based problem transformation (APT). APT creates an autoencoder that learns the relative importance of each variable by competitively reconstructing dominated and non-dominated solutions. Using the learned importance, all variables are divided into multiple groups without consuming any function evaluations, and the number of groups grows dynamically with the population's evolutionary status. Each variable group has an associated autoencoder that transforms its search space into an adaptable small-scale representation space; the search then proceeds within these dynamic representation spaces, enabling effective production of offspring solutions. To assess the effectiveness of APT, extensive tests are performed on benchmark suites and real-world LMOPs with variable counts ranging from 10^3 to 10^4. The comparative results demonstrate the advantages of the proposed optimizer in solving these LMOPs within a limited budget of 10^5 function evaluations.
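The core idea in the abstract, compressing a high-dimensional decision space into a small learned representation space and generating offspring there, can be sketched with a minimal linear autoencoder. This is an illustrative toy only, not the authors' APT implementation: the sizes (100 variables, 5 latent dimensions), the plain gradient-descent training, and the Gaussian latent perturbation are all assumptions for the demo, and the competitive dominated/non-dominated reconstruction and dynamic variable grouping of APT are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 100-variable problem compressed to a 5-D latent space.
n_vars, n_latent, n_solutions = 100, 5, 64
X = rng.random((n_solutions, n_vars))  # stand-in "population" of solutions

# Linear encoder/decoder weights, trained by gradient descent on the
# mean squared reconstruction error ||X - decode(encode(X))||^2.
W_enc = rng.normal(0.0, 0.1, (n_vars, n_latent))
W_dec = rng.normal(0.0, 0.1, (n_latent, n_vars))

def reconstruct(X, W_enc, W_dec):
    return X @ W_enc @ W_dec

loss_before = np.mean((X - reconstruct(X, W_enc, W_dec)) ** 2)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc          # encode: (n_solutions, n_latent)
    err = Z @ W_dec - X    # decode and compare to the originals
    # Gradients of the (unnormalized) squared-error loss w.r.t. each weight.
    W_dec -= lr * (Z.T @ err) / n_solutions
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / n_solutions

loss_after = np.mean((X - reconstruct(X, W_enc, W_dec)) ** 2)

# Offspring generation in the representation space: perturb a parent's
# 5-D latent code instead of its 100 raw variables, then decode.
parent = X[0]
child = (parent @ W_enc + rng.normal(0.0, 0.1, n_latent)) @ W_dec
```

The point of the sketch is the dimensionality of the variation step: the random perturbation acts on `n_latent` coordinates rather than `n_vars`, which is what makes searching in a learned representation space attractive for large-scale problems.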
Journal Introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few illustrative examples are glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.