Towards Robust Few-shot Class Incremental Learning in Audio Classification using Contrastive Representation

Riyansha Singh (IIT Kanpur, India), Parinita Nema (IISER Bhopal, India), Vinod K Kurmi (IISER Bhopal, India)
arXiv:2407.19265 · arXiv - CS - Sound · published 2024-07-27 · citations: 0

Abstract

In machine learning applications, gradual data ingress is common, especially in audio processing, where incremental learning is vital for real-time analytics. Few-shot class-incremental learning addresses the challenges arising from limited incoming data. Existing methods often integrate additional trainable components, or rely on a fixed embedding extractor after training on base sessions, to mitigate catastrophic forgetting and the danger of model overfitting. However, using cross-entropy loss alone during base-session training is suboptimal for audio data. To address this, we propose incorporating supervised contrastive learning to refine the representation space, enhancing discriminative power and leading to better generalization, since it facilitates seamless integration of incremental classes upon arrival. Experimental results on the NSynth and LibriSpeech datasets with 100 classes, as well as the ESC dataset with 50 and 10 classes, demonstrate state-of-the-art performance.
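The key ingredient described above — a supervised contrastive loss applied during base-session training — can be sketched as follows. This is an illustrative NumPy re-implementation of the standard supervised contrastive (SupCon) formulation of Khosla et al. (2020), not the authors' released code; the temperature value and function name are assumptions for illustration.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    For each anchor, positives are all other samples sharing its label;
    the loss pulls positives together and pushes other samples apart.
    """
    # L2-normalise embeddings so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                    # pairwise scaled similarities
    n = len(labels)

    # Exclude self-similarity from every denominator
    logits_mask = ~np.eye(n, dtype=bool)

    # Numerical stability: subtract each row's max before exponentiating
    sim = sim - sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))

    # Positive pairs: same label, not the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                          # anchors with >= 1 positive

    # Average log-probability over each anchor's positives, then negate
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

As a sanity check, a batch whose same-class embeddings already cluster tightly should incur a much lower loss than one where classes are intermixed, which is exactly the pressure that refines the representation space before incremental classes arrive.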