SPCNet: Deep Self-Paced Curriculum Network Incorporated With Inductive Bias.

IF 10.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-03-20 | DOI: 10.1109/TNNLS.2025.3544724
Yue Zhao, Maoguo Gong, Mingyang Zhang, A K Qin, Fenlong Jiang, Jianzhao Li
{"title":"SPCNet: Deep Self-Paced Curriculum Network Incorporated With Inductive Bias.","authors":"Yue Zhao, Maoguo Gong, Mingyang Zhang, A K Qin, Fenlong Jiang, Jianzhao Li","doi":"10.1109/TNNLS.2025.3544724","DOIUrl":null,"url":null,"abstract":"<p><p>The vulnerability to poor local optimum and the memorization of noise data limit the generalizability and reliability of massively parameterized convolutional neural networks (CNNs) on complex real-world data. Self-paced curriculum learning (SPCL), which models the easy-to-hard learning progression from human beings, is considered as a potential savior. In spite of the fact that numerous SPCL solutions have been explored, it still confronts two main challenges exactly in solving deep networks. By virtue of various designed regularizers, existing weighting schemes independent of the learning objective heavily rely on the prior knowledge. In addition, alternative optimization strategy (AOS) enables the tedious iterative training procedure, thus there is still not an efficient framework that integrates the SPCL paradigm well with networks. This article delivers a novel insight that attention mechanism allows for adaptive enhancement in the contribution of diverse instance information to the gradient propagation. Accordingly, we propose a general-purpose deep SPCL paradigm that incorporates the preferences of implicit regularizer for different samples into the network structure with inductive bias, which in turn is formalized as the self-paced curriculum network (SPCNet). Our proposal allows simultaneous online difficulty estimation, adaptive sample selection, and model updating in an end-to-end manner, which significantly facilitates the collaboration of SPCL to deep networks. Experiments on image classification and scene classification tasks demonstrate that our approach surpasses the state-of-the-art schemes and obtains superior performance.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2025.3544724","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The vulnerability to poor local optima and the memorization of noisy data limit the generalizability and reliability of massively parameterized convolutional neural networks (CNNs) on complex real-world data. Self-paced curriculum learning (SPCL), which mimics the easy-to-hard learning progression of human beings, is considered a potential remedy. Although numerous SPCL solutions have been explored, the paradigm still faces two main challenges when applied to deep networks. First, existing weighting schemes, built on various handcrafted regularizers independent of the learning objective, rely heavily on prior knowledge. Second, the alternating optimization strategy (AOS) entails a tedious iterative training procedure, so there is still no efficient framework that integrates the SPCL paradigm well with deep networks. This article delivers a novel insight: the attention mechanism allows adaptive enhancement of the contribution of individual instances to gradient propagation. Accordingly, we propose a general-purpose deep SPCL paradigm that incorporates the preferences of an implicit regularizer for different samples into the network structure as an inductive bias, formalized as the self-paced curriculum network (SPCNet). Our proposal allows simultaneous online difficulty estimation, adaptive sample selection, and model updating in an end-to-end manner, which significantly facilitates the integration of SPCL with deep networks. Experiments on image classification and scene classification tasks demonstrate that our approach surpasses state-of-the-art schemes and achieves superior performance.
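
For readers unfamiliar with the formulation the two challenges refer to, the classical SPCL objective (standard in the self-paced learning literature, not restated in this abstract) augments the empirical loss with a self-paced regularizer over latent sample weights:

$$
\min_{\mathbf{w},\; \mathbf{v} \in [0,1]^{n}} \; \sum_{i=1}^{n} v_i \,\ell_i(\mathbf{w}) \;+\; f(\mathbf{v}; \lambda),
$$

where $\ell_i(\mathbf{w})$ is the loss of the $i$-th sample under model parameters $\mathbf{w}$, $v_i$ is its latent weight, $f$ is a handcrafted self-paced regularizer, and the age parameter $\lambda$ controls how many "hard" samples are admitted as training progresses. AOS alternates between updating $\mathbf{w}$ with $\mathbf{v}$ fixed and re-solving for $\mathbf{v}$ with $\mathbf{w}$ fixed, which is exactly the tedious iterative procedure the abstract criticizes.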
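The end-to-end alternative the abstract describes can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's actual SPCNet architecture (which the abstract does not specify): a small attention head scores each sample's difficulty from its instantaneous loss and rescales that sample's contribution to the gradient, so difficulty estimation, sample selection, and model updating happen in a single backward pass. All class and variable names here are invented for illustration.

```python
import torch
import torch.nn as nn

class SelfPacedAttentionLoss(nn.Module):
    """Hypothetical sketch: learn per-sample weights from per-sample losses
    via a tiny attention head, so difficulty estimation, sample selection,
    and model updating happen in one end-to-end backward pass."""

    def __init__(self, temperature: float = 1.0):
        super().__init__()
        self.temperature = temperature
        # small scorer mapping a sample's loss value to a difficulty logit
        self.scorer = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # per-sample cross-entropy, kept unreduced so each sample gets its own weight
        per_sample = nn.functional.cross_entropy(logits, targets, reduction="none")
        # score difficulty from the detached loss so weighting does not distort
        # the loss values themselves
        difficulty = self.scorer(per_sample.detach().unsqueeze(1)).squeeze(1)
        # negative sign: samples scored as harder receive smaller weights,
        # an easy-first bias in the spirit of self-paced learning
        weights = torch.softmax(-difficulty / self.temperature, dim=0)
        # weighted sum rescales each sample's contribution to the gradient
        return (weights * per_sample).sum()

# usage sketch
model = nn.Linear(32, 10)
criterion = SelfPacedAttentionLoss()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = criterion(model(x), y)
loss.backward()
```

Compared with AOS, the weights here are produced by a differentiable module inside the forward pass, so no outer loop over $\mathbf{v}$ is needed; at sketch level, this mirrors the idea of building the regularizer's sample preferences into the network structure as an inductive bias.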

Source journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Annual articles: 2102
Review time: 3-8 weeks
Journal introduction: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.