Solving sparse multi-objective optimization problems via dynamic adaptive grouping and reward-penalty sparse strategies

IF 8.5 · CAS Tier 1 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Swarm and Evolutionary Computation, Vol. 94, Article 101881 · Pub Date: 2025-04-01 (Epub: 2025-02-18) · DOI: 10.1016/j.swevo.2025.101881
Zhenxing Yu, Qinwei Fan, Jacek M. Zurada, Jigen Peng, Haiyang Li, Jian Wang
Citations: 0

Abstract

Sparse Multi-Objective Optimization Problems (SMOPs) arise in fields such as machine learning, signal processing, and data mining. Although evolutionary algorithms perform well on many complex problems, their performance often degrades on SMOPs. The main causes of this decline are the curse of dimensionality and a failure to exploit the sparsity of Pareto-optimal solutions. To address this, this paper proposes a method for solving SMOPs based on dynamic adaptive grouping and reward-penalty sparse strategies. First, to obtain more effective prior information, a sparse initialization strategy is proposed that incorporates prior knowledge about the sparsity of Pareto-optimal solutions into the initial population. During the evolutionary phase, a dynamic adaptive grouping strategy for decision variables is introduced; combined with crossover and mutation operators, it guides the population toward effective sparse directions. Furthermore, to better identify zero-valued decision variables in Pareto-optimal solutions, a reward-penalty mechanism is designed to update per-variable scores. By combining this mechanism with the adaptive grouping strategy, the method flips low-scoring decision variables to zero with higher probability. To validate the proposed algorithm, experiments were conducted on eight benchmark problems, including comparative experiments across different initialization methods. The results indicate that the algorithm exhibits significant advantages in solving SMOPs.
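The reward-penalty scoring and zero-flipping ideas described in the abstract can be sketched as follows. This is an illustrative sketch only: the function names, the score-update rule (reward variables that are zero in top-ranked solutions, penalize those that are not), and the logistic mapping from score to flip probability are all assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_scores(scores, population, fitness_ranks, reward=1.0, penalty=0.5):
    """Reward decision variables that are zero in better-ranked solutions and
    penalize those that are nonzero there (assumed rule; the paper's exact
    update may differ)."""
    # Better half of the population by rank (lower rank = better).
    top = population[fitness_ranks < np.median(fitness_ranks)]
    zero_freq = (top == 0).mean(axis=0)  # fraction of top solutions where each variable is zero
    return scores + reward * zero_freq - penalty * (1.0 - zero_freq)

def sparse_flip(individual, scores, base_rate=0.2):
    """Flip low-scoring variables to zero with probability inversely related
    to their score (illustrative logistic squash)."""
    p_flip = base_rate * (1.0 - 1.0 / (1.0 + np.exp(-scores)))  # low score -> high p_flip
    mask = rng.random(individual.shape) < p_flip
    out = individual.copy()
    out[mask] = 0.0
    return out
```

In this sketch, a variable that is consistently zero among non-dominated solutions accumulates a high score and is left alone, while a variable that is rarely zero there accumulates a low score and is zeroed out with probability approaching `base_rate` during mutation.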
Source journal: Swarm and Evolutionary Computation
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, THEORY & METHODS
CiteScore: 16.00
Self-citation rate: 12.00%
Articles per year: 169
Journal description: Swarm and Evolutionary Computation is a pioneering peer-reviewed journal focused on the latest research and advancements in nature-inspired intelligent computation using swarm and evolutionary algorithms. It covers theoretical, experimental, and practical aspects of these paradigms and their hybrids, promoting interdisciplinary research. The journal prioritizes the publication of high-quality, original articles that push the boundaries of evolutionary computation and swarm intelligence. Additionally, it welcomes survey papers on current topics and novel applications. Topics of interest include but are not limited to: Genetic Algorithms, and Genetic Programming, Evolution Strategies, and Evolutionary Programming, Differential Evolution, Artificial Immune Systems, Particle Swarms, Ant Colony, Bacterial Foraging, Artificial Bees, Fireflies Algorithm, Harmony Search, Artificial Life, Digital Organisms, Estimation of Distribution Algorithms, Stochastic Diffusion Search, Quantum Computing, Nano Computing, Membrane Computing, Human-centric Computing, Hybridization of Algorithms, Memetic Computing, Autonomic Computing, Self-organizing systems, Combinatorial, Discrete, Binary, Constrained, Multi-objective, Multi-modal, Dynamic, and Large-scale Optimization.
Latest articles in this journal:
- Integrated scheduling of cargo vessels, research vessels, and marine experiments in multifunctional ports using Q-learning enhanced PSO
- A competition-driven two-phase evolutionary algorithm for constrained multi-objective optimization
- A hybrid evolutionary algorithm for 2D variable-sized bin packing with guillotine constraint in manufacturing
- Conditional diffusion with gradient guidance for high-dimensional expensive multi-objective optimization
- Adaptive surrogate-based strategy for accelerating convergence speed when solving expensive unconstrained Multi-Objective Optimisation Problems