A New Multitask Joint Learning Framework for Expensive Multi-Objective Optimization Problems

IF 5.3 · CAS Tier 3 (Computer Science) · Q1 (Computer Science, Artificial Intelligence) · IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-02-12 · DOI: 10.1109/TETCI.2024.3359042
Jianping Luo;Yongfei Dong;Qiqi Liu;Zexuan Zhu;Wenming Cao;Kay Chen Tan;Yaochu Jin
{"title":"A New Multitask Joint Learning Framework for Expensive Multi-Objective Optimization Problems","authors":"Jianping Luo;Yongfei Dong;Qiqi Liu;Zexuan Zhu;Wenming Cao;Kay Chen Tan;Yaochu Jin","doi":"10.1109/TETCI.2024.3359042","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a multi-objective optimization algorithm based on multitask conditional neural processes (MTCNPs) to deal with expensive multi-objective optimization problems (MOPs). In the proposed algorithm, an MOP is decomposed into several subproblems. Several related subproblems are assigned to a task group and jointly handled using an MTCNPs surrogate model, in which multi-task learning is incorporated to exploit the similarity across the subproblems via joint surrogate model learning. Each subproblem in a task group is modeled by a conditional neural processes (CNPs) instead of a Gaussian Process (GP), thus avoiding the calculation of the GP covariance matrix. In addition, multiple subproblems are jointly learned through a multi-layer similarity network with activation function, which can measure and utilize the similarity and useful information among subproblems more effectively and improve the accuracy and robustness of the surrogate model. Experimental studies under several scenarios indicate that the proposed algorithm performs better than several state-of-the-art multi-objective evolutionary algorithms for expensive MOPs. The parameter sensitivity and effectiveness of the proposed algorithm are analyzed in detail.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":5.3000,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10433214/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we propose a multi-objective optimization algorithm based on multitask conditional neural processes (MTCNPs) to deal with expensive multi-objective optimization problems (MOPs). In the proposed algorithm, an MOP is decomposed into several subproblems. Several related subproblems are assigned to a task group and jointly handled using an MTCNP surrogate model, in which multi-task learning is incorporated to exploit the similarity across the subproblems via joint surrogate model learning. Each subproblem in a task group is modeled by a conditional neural process (CNP) instead of a Gaussian process (GP), thus avoiding the calculation of the GP covariance matrix. In addition, multiple subproblems are jointly learned through a multi-layer similarity network with activation functions, which measures and exploits the similarity and useful information among subproblems more effectively and improves the accuracy and robustness of the surrogate model. Experimental studies under several scenarios indicate that the proposed algorithm performs better than several state-of-the-art multi-objective evolutionary algorithms for expensive MOPs. The parameter sensitivity and effectiveness of the proposed algorithm are analyzed in detail.
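To make the abstract's pipeline concrete (decompose the MOP into scalar subproblems, group related subproblems, and model each with a CNP surrogate instead of a GP), the sketch below shows a minimal CNP-style surrogate for a single decomposed subproblem. It is not the authors' MTCNPs implementation: the Tchebycheff scalarization, network sizes, and every identifier are illustrative assumptions, and the multi-task similarity network that couples the subproblems of a task group is omitted.

```python
# Minimal sketch (PyTorch) of a CNP-style surrogate for ONE decomposed
# subproblem. NOT the paper's MTCNPs model; all names and sizes are
# assumptions made for illustration only.
import torch
import torch.nn as nn


class CNPSurrogate(nn.Module):
    """Predicts a scalarized objective value from a set of already-evaluated points."""

    def __init__(self, x_dim: int, hidden: int = 64, repr_dim: int = 64):
        super().__init__()
        # Encoder: maps each (x, y) context pair to a latent representation.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, repr_dim),
        )
        # Decoder: combines the aggregated representation with a query point and
        # outputs a predictive mean and log-variance directly, so no GP
        # covariance matrix is ever built or inverted.
        self.decoder = nn.Sequential(
            nn.Linear(repr_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x_ctx, y_ctx, x_query):
        # Permutation-invariant aggregation (mean) over the context set.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        r = r.expand(x_query.shape[0], -1)
        out = self.decoder(torch.cat([r, x_query], dim=-1))
        return out[:, :1], out[:, 1:]  # predictive mean, log-variance


def tchebycheff(f_vals, weight, z_star):
    """Scalarize objective vectors into one subproblem value (decomposition step)."""
    return torch.max(weight * torch.abs(f_vals - z_star), dim=-1).values


if __name__ == "__main__":
    torch.manual_seed(0)
    x_dim, n_ctx, n_query = 5, 20, 8
    x_ctx = torch.rand(n_ctx, x_dim)   # decision vectors already (expensively) evaluated
    f_ctx = torch.rand(n_ctx, 2)       # their two objective values
    weight, z_star = torch.tensor([0.4, 0.6]), torch.zeros(2)
    y_ctx = tchebycheff(f_ctx, weight, z_star).unsqueeze(-1)

    mean, log_var = CNPSurrogate(x_dim)(x_ctx, y_ctx, torch.rand(n_query, x_dim))
    print(mean.shape, log_var.shape)   # torch.Size([8, 1]) twice
```

Because a CNP amortizes regression into an encoder-decoder pair, fitting and querying the surrogate avoids the covariance-matrix construction and inversion that GP modeling requires; in the paper, the subproblems within a task group are additionally coupled through a multi-layer similarity network so that related subproblems share information during joint training.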
Source journal
CiteScore: 10.30
Self-citation rate: 7.50%
Articles published: 147
Journal description: The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys. TETCI is an electronic-only publication. TETCI publishes six issues per year. Authors are encouraged to submit manuscripts in any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few such illustrative examples are glial cell networks, computational neuroscience, Brain Computer Interface, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, computational intelligence for the IoT and Smart-X technologies.