Modeling Inner- and Cross-Task Contrastive Relations for Continual Image Classification

IF 8.4 · JCR Q1 (Computer Science, Information Systems) · IEEE Transactions on Multimedia · Pub Date: 2024-06-13 · DOI: 10.1109/TMM.2024.3414277
Yuxuan Luo;Runmin Cong;Xialei Liu;Horace Ho Shing Ip;Sam Kwong
IEEE Transactions on Multimedia, vol. 26, pp. 10842-10853. Published online: 2024-06-13. https://ieeexplore.ieee.org/document/10557156/
Citations: 0

Abstract

Existing continual image classification methods demonstrate that samples across all sequences of continual classification tasks contain common (task-invariant) features and class-specific (task-variant) features that can be decoupled for classification. However, existing feature decomposition strategies focus only on individual tasks and neglect the essential cues provided by the relationships between different tasks, thereby limiting continual image classification performance. To address this issue, we propose an Adversarial Contrastive Continual Learning (ACCL) method that decouples task-invariant and task-variant features by constructing all-round, multi-level contrasts on sample pairs within individual tasks or across different tasks. Specifically, three constraints are imposed on the distributions of task-invariant and task-variant features: task-invariant features across different tasks should remain consistent, task-variant features should exhibit differences, and task-invariant and task-variant features should differ from each other. At the same time, we design an effective contrastive replay strategy that makes full use of replay samples in the construction of sample pairs, further alleviating the forgetting problem and modeling cross-task relationships. Through extensive experiments on continual image classification tasks on CIFAR100, MiniImageNet and TinyImageNet, we show the superiority of our proposed strategy, improving accuracy and producing better visualization results.
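The three distributional constraints described above can be sketched as simple contrastive penalties on decomposed feature vectors. The sketch below is a minimal illustration, not the paper's actual loss formulation: the function name `accl_constraint_losses`, the cosine-similarity formulation, and the hinge at zero are all assumptions made for clarity.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def accl_constraint_losses(inv_a, inv_b, var_a, var_b):
    """Illustrative penalties for the three constraints (lower is better).

    inv_a, inv_b: task-invariant features of two samples (possibly from
                  different tasks); var_a, var_b: their task-variant features.
    """
    # (1) Task-invariant features across tasks should stay consistent:
    #     penalize dissimilarity (pull them together).
    l_inv = 1.0 - cosine(inv_a, inv_b)
    # (2) Task-variant features should exhibit differences:
    #     penalize similarity (push them apart).
    l_var = max(0.0, cosine(var_a, var_b))
    # (3) Task-invariant and task-variant features of the same sample
    #     should differ from each other.
    l_cross = max(0.0, cosine(inv_a, var_a)) + max(0.0, cosine(inv_b, var_b))
    return l_inv, l_var, l_cross
```

In practice such terms would be computed over batches of sample pairs (including replayed samples) and combined with the classification loss; the scalar form here only shows the direction each constraint pushes the feature distributions.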
Source Journal
IEEE Transactions on Multimedia (Engineering & Technology - Telecommunications)
CiteScore: 11.70
Self-citation rate: 11.00%
Articles per year: 576
Review time: 5.5 months
Journal Introduction: The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.