Assessor-Guided Learning for Continual Environments

M. A. Ma'sum, Mahardhika Pratama, E. Lughofer, Weiping Ding, W. Jatmiko
{"title":"Assessor-Guided Learning for Continual Environments","authors":"M. A. Ma'sum, Mahardhika Pratama, E. Lughofer, Weiping Ding, W. Jatmiko","doi":"10.48550/arXiv.2303.11624","DOIUrl":null,"url":null,"abstract":"This paper proposes an assessor-guided learning strategy for continual learning where an assessor guides the learning process of a base learner by controlling the direction and pace of the learning process thus allowing an efficient learning of new environments while protecting against the catastrophic interference problem. The assessor is trained in a meta-learning manner with a meta-objective to boost the learning process of the base learner. It performs a soft-weighting mechanism of every sample accepting positive samples while rejecting negative samples. The training objective of a base learner is to minimize a meta-weighted combination of the cross entropy loss function, the dark experience replay (DER) loss function and the knowledge distillation loss function whose interactions are controlled in such a way to attain an improved performance. A compensated over-sampling (COS) strategy is developed to overcome the class imbalanced problem of the episodic memory due to limited memory budgets. Our approach, Assessor-Guided Learning Approach (AGLA), has been evaluated in the class-incremental and task-incremental learning problems. AGLA achieves improved performances compared to its competitors while the theoretical analysis of the COS strategy is offered. Source codes of AGLA, baseline algorithms and experimental logs are shared publicly in \\url{https://github.com/anwarmaxsum/AGLA} for further study.","PeriodicalId":13641,"journal":{"name":"Inf. Sci.","volume":"84 1","pages":"119088"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inf. Sci.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2303.11624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper proposes an assessor-guided learning strategy for continual learning, in which an assessor guides the learning process of a base learner by controlling its direction and pace, enabling efficient learning of new environments while protecting against the catastrophic interference problem. The assessor is trained in a meta-learning manner with a meta-objective to boost the learning process of the base learner. It applies a soft-weighting mechanism to every sample, accepting positive samples while rejecting negative ones. The training objective of the base learner is to minimize a meta-weighted combination of the cross-entropy loss, the dark experience replay (DER) loss, and the knowledge distillation loss, whose interactions are controlled so as to attain improved performance. A compensated over-sampling (COS) strategy is developed to overcome the class-imbalance problem of the episodic memory caused by limited memory budgets. Our approach, the Assessor-Guided Learning Approach (AGLA), has been evaluated on class-incremental and task-incremental learning problems. AGLA achieves improved performance compared to its competitors, and a theoretical analysis of the COS strategy is provided. The source code of AGLA, the baseline algorithms, and the experimental logs are shared publicly at https://github.com/anwarmaxsum/AGLA for further study.
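The meta-weighted loss combination described in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the paper's exact formulation: the function name, the split of assessor weights into one weight per sample per loss term, and the temperature value are all assumptions; the authors' implementation is in the linked repository.

```python
# A hedged sketch of a meta-weighted combination of cross-entropy,
# DER, and knowledge-distillation losses. Names and the per-term
# weight split are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def agla_style_loss(logits, targets, assessor_weights,
                    replay_logits=None, stored_logits=None,
                    teacher_logits=None, temperature=2.0):
    """Combine CE, DER, and KD losses under per-sample soft weights.

    assessor_weights: tuple (w_ce, w_der, w_kd) of per-sample weights
    in [0, 1] produced by the assessor (an assumed interface).
    """
    w_ce, w_der, w_kd = assessor_weights

    # Cross-entropy on the current batch, weighted per sample.
    ce = F.cross_entropy(logits, targets, reduction="none")
    loss = (w_ce * ce).mean()

    # Dark experience replay: match the logits recorded in episodic
    # memory (the replay batch is separate from the current batch).
    if replay_logits is not None and stored_logits is not None:
        der = F.mse_loss(replay_logits, stored_logits,
                         reduction="none").mean(dim=1)
        loss = loss + (w_der * der).mean()

    # Knowledge distillation from the previous-task (teacher) model.
    if teacher_logits is not None:
        kd = F.kl_div(
            F.log_softmax(logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="none",
        ).sum(dim=1) * temperature ** 2
        loss = loss + (w_kd * kd).mean()

    return loss
```

Because the assessor emits soft weights rather than hard accept/reject decisions, a sample judged harmful to stability merely contributes less gradient instead of being dropped outright.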
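The COS strategy addresses the class imbalance that accumulates in a fixed-budget episodic memory. The exact compensation scheme is given in the paper and repository; the sketch below shows one plausible reading, plain inverse-frequency re-sampling, and its function name and memory layout are assumptions.

```python
# A hedged sketch of over-sampling a class-imbalanced episodic memory
# via inverse-frequency sampling. Illustrative only; COS itself may
# compensate differently (see the paper for the exact scheme).
import collections
import random

def oversample_memory(memory, batch_size):
    """memory: list of (x, y, stored_logits) tuples with imbalanced y.

    Draws a replay batch in which each sample is picked with probability
    proportional to the inverse frequency of its class, so minority
    classes are replayed as often as majority ones in expectation.
    """
    counts = collections.Counter(y for _, y, _ in memory)
    inv_freq = {c: 1.0 / n for c, n in counts.items()}
    weights = [inv_freq[y] for _, y, _ in memory]
    return random.choices(memory, weights=weights, k=batch_size)
```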