Reweighted-Boosting: A Gradient-Based Boosting Optimization Framework

IEEE Transactions on Neural Networks and Learning Systems · IF 8.9 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2024-12-23 · DOI: 10.1109/TNNLS.2024.3457764
Guanxiong He;Zheng Wang;Liaoyuan Tang;Weizhong Yu;Feiping Nie;Xuelong Li
Journal Article · Volume 36, Issue 7, pages 11953-11965 · Full text: https://ieeexplore.ieee.org/document/10812027/
Citations: 0

Abstract

Boosting is a well-established ensemble learning approach that aims to enhance overall performance by combining multiple weak learners with a linear combination structure. It operates on the principle of using new learners to compensate for the shortcomings of previous learners and is known for its ability to reduce computational resource requirements while mitigating the risks of overfitting. However, from the perspective of convex optimization, it becomes apparent that classical boosting methods often converge to local optima rather than global optima when minimizing the target loss, due to their greedy strategy. In this article, we address this issue and propose a novel optimization framework for the boosting paradigm. Our framework focuses on refining the ensemble model by further minimizing the loss function through the reallocation of base learner weights, which results in a more robust and powerful learner. We have conducted experiments on various real-world and synthetic datasets, and our findings confirm that our Reweighted-Boosting model consistently outperforms its counterparts. It also exhibits an increased classification margin for the data, making it a valuable enhancement to original boosting algorithms.
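The paper's exact algorithm is not reproduced in this abstract, but the core idea it describes — keep the greedily trained base learners fixed and then jointly re-optimize their combination weights against the ensemble loss — can be sketched. The sketch below is a minimal illustration only: it assumes AdaBoost-style decision stumps, the exponential loss, and a plain gradient-descent reweighting step with backtracking; the function names (`fit_adaboost`, `reweight`) and all hyperparameters are this sketch's own choices, not the authors' implementation.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    # Decision stump: predict +1/-1 by thresholding a single feature.
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def fit_adaboost(X, y, n_rounds=10):
    """Classical greedy AdaBoost with stumps; returns stumps and weights alpha."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # Exhaustive search for the stump with lowest weighted error.
        for feat in range(X.shape[1]):
            for thresh in np.unique(X[:, feat]):
                for sign in (1.0, -1.0):
                    pred = stump_predict(X, feat, thresh, sign)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (feat, thresh, sign)
        err = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, *best)
        w *= np.exp(-alpha * y * pred)   # re-weight samples, as in AdaBoost
        w /= w.sum()
        stumps.append(best)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def exp_loss(a, H, y):
    # Exponential ensemble loss L(a) = mean_i exp(-y_i * sum_t a_t h_t(x_i)).
    return np.exp(-y * (a @ H)).mean()

def reweight(X, y, stumps, alphas, lr=0.1, steps=200):
    """Sketch of the reweighting idea: starting from the greedy AdaBoost
    weights, run gradient descent on the exponential loss over the
    base-learner weights jointly, with a simple backtracking safeguard."""
    H = np.stack([stump_predict(X, *s) for s in stumps])  # shape (T, n)
    a = alphas.copy()
    for _ in range(steps):
        margin = y * (a @ H)
        grad = -(H * (y * np.exp(-margin))).mean(axis=1)  # dL/da_t
        candidate = a - lr * grad
        if exp_loss(candidate, H, y) < exp_loss(a, H, y):
            a = candidate
        else:
            lr *= 0.5  # shrink the step when it would overshoot
    return a
```

By construction the backtracking step never accepts a weight vector with higher loss, so the re-optimized weights are at least as good as the greedy ones on the training loss — the convexity of the loss in the weights (with the learners fixed) is what makes this joint refinement tractable.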
Source journal: IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Articles published per year: 2102
Review time: 3-8 weeks
About the journal: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.
Latest articles in this journal:
Seeing What Few-Shot Learners See: Contrastive Cross-Class Attribution for Explainability.
Accelerated Reinforcement Learning With Verifiable Excitation for Cubic Convergence.
Learning Optimal Policies With Local Observations for Cooperative Multiagent Reinforcement Learning.
Learning From M-Tuple One-vs-All Confidence Comparison Data.
Sparse Variational Student-t Processes for Heavy-Tailed Modeling.