Comparative Study of Adversarial Training Methods for Long-tailed Classification

Xiangxian Li, Haokai Ma, Lei Meng, Xiangxu Meng
{"title":"长尾分类对抗性训练方法的比较研究","authors":"Xiangxian Li, Haokai Ma, Lei Meng, Xiangxu Meng","doi":"10.1145/3475724.3483601","DOIUrl":null,"url":null,"abstract":"Adversarial training is originated in image classification to address the problem of adversarial attacks, where an invisible perturbation in an image leads to a significant change in model decision. It recently has been observed to be effective in alleviating the long-tailed classification problem, where an imbalanced size of classes makes the model has much lower performance on small classes. However, existing methods typically focus on the methods to generate perturbations for data, while the contributions of different perturbations to long-tailed classification have not been well analyzed. To this end, this paper presents an investigation on the perturbation generation and incorporation components of existing adversarial training methods and proposes a taxonomy that defines these methods using three levels of components, in terms of information, methodology, and optimization. This taxonomy may serve as a design paradigm where an adversarial training algorithm can be created by combining different components in the taxonomy. A comparative study is conducted to verify the influence of each component in long-tailed classification. Experimental results on two benchmarking datasets show that a combination of statistical perturbations and hybrid optimization achieves a promising performance, and the gradient-based method typically improves the performance of both the head and tail classes. More importantly, it is verified that a reasonable combination of the components in our taxonomy may create an algorithm that outperforms the state-of-the-art.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"180 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Comparative Study of Adversarial Training Methods for Long-tailed Classification\",\"authors\":\"Xiangxian Li, Haokai Ma, Lei Meng, Xiangxu Meng\",\"doi\":\"10.1145/3475724.3483601\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial training is originated in image classification to address the problem of adversarial attacks, where an invisible perturbation in an image leads to a significant change in model decision. It recently has been observed to be effective in alleviating the long-tailed classification problem, where an imbalanced size of classes makes the model has much lower performance on small classes. However, existing methods typically focus on the methods to generate perturbations for data, while the contributions of different perturbations to long-tailed classification have not been well analyzed. To this end, this paper presents an investigation on the perturbation generation and incorporation components of existing adversarial training methods and proposes a taxonomy that defines these methods using three levels of components, in terms of information, methodology, and optimization. This taxonomy may serve as a design paradigm where an adversarial training algorithm can be created by combining different components in the taxonomy. A comparative study is conducted to verify the influence of each component in long-tailed classification. 
Experimental results on two benchmarking datasets show that a combination of statistical perturbations and hybrid optimization achieves a promising performance, and the gradient-based method typically improves the performance of both the head and tail classes. More importantly, it is verified that a reasonable combination of the components in our taxonomy may create an algorithm that outperforms the state-of-the-art.\",\"PeriodicalId\":279202,\"journal\":{\"name\":\"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia\",\"volume\":\"180 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3475724.3483601\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3475724.3483601","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Adversarial training originated in image classification as a defense against adversarial attacks, where an imperceptible perturbation of an image leads to a significant change in the model's decision. It has recently been observed to be effective in alleviating the long-tailed classification problem, where imbalanced class sizes cause the model to perform much worse on the small classes. However, existing work typically focuses on how perturbations are generated for the data, while the contributions of different perturbations to long-tailed classification have not been well analyzed. To this end, this paper investigates the perturbation-generation and perturbation-incorporation components of existing adversarial training methods and proposes a taxonomy that defines these methods with three levels of components: information, methodology, and optimization. This taxonomy may serve as a design paradigm in which an adversarial training algorithm can be created by combining different components from the taxonomy. A comparative study is conducted to verify the influence of each component in long-tailed classification. Experimental results on two benchmark datasets show that a combination of statistical perturbations and hybrid optimization achieves promising performance, and that the gradient-based method typically improves the performance of both the head and tail classes. More importantly, it is verified that a reasonable combination of the components in our taxonomy can create an algorithm that outperforms the state-of-the-art.
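
To make the component families named in the abstract concrete, here is a minimal sketch in PyTorch of gradient-based perturbation generation (FGSM-style) combined with a hybrid clean-plus-adversarial loss. This is a generic illustration, not the paper's implementation: the `epsilon` and `adv_weight` values, the FGSM choice, and the reading of "hybrid optimization" as a weighted sum of clean and adversarial losses are all assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, images, labels, epsilon=8 / 255):
    """Gradient-based perturbation: take one epsilon-sized step along the
    sign of the loss gradient with respect to the input (FGSM).
    Illustrative only; the paper compares several generation methods."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + epsilon * grad.sign()).clamp(0, 1).detach()

def hybrid_training_step(model, optimizer, images, labels,
                         epsilon=8 / 255, adv_weight=0.5):
    """One optimization step on a weighted sum of the clean loss and the
    adversarial loss -- one plausible reading of "hybrid optimization";
    the exact definition is given in the paper."""
    adv_images = fgsm_perturbation(model, images, labels, epsilon)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images), labels)
    adv_loss = F.cross_entropy(model(adv_images), labels)
    loss = (1 - adv_weight) * clean_loss + adv_weight * adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the taxonomy's terms, swapping `fgsm_perturbation` for a statistics-driven noise generator would change the information and methodology components while leaving the optimization component unchanged; this kind of substitution is what the paper's comparative study evaluates.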