Feature space separation by conformity loss driven training of CNN

IFAC Journal of Systems and Control · IF 1.8 · Q3 (AUTOMATION & CONTROL SYSTEMS) · Pub Date: 2024-04-13 · DOI: 10.1016/j.ifacsc.2024.100260
N. Ding , H. Arabian , K. Möller
{"title":"通过一致性损失驱动的 CNN 训练实现特征空间分离","authors":"N. Ding ,&nbsp;H. Arabian ,&nbsp;K. Möller","doi":"10.1016/j.ifacsc.2024.100260","DOIUrl":null,"url":null,"abstract":"<div><p>Convolutional neural networks (CNNs) have enabled tremendous achievements in image classification, as the model can automatically extract image features and assign a proper classification. Nevertheless, the classification is lacking robustness to — for humans’ invisible perturbations on the input. To improve the robustness of the CNN model, it is necessary to understand the decision-making procedure of CNN models. By inspecting the learned feature space, we found that the classification regions are not always clearly separated by the CNN model. The overlap of classification regions increases the possibility to less perturbation induced input changes on classification results. Therefore, the clear separation of feature spaces of the CNN model should support decision robustness. In this paper, we propose to use a novel loss function called “conformity loss” to strengthen disjoint feature spaces during learning at different layers of the CNN, in order to improve the intra-class compactness and inter-class differences in trained representations. The same function was used as an evaluation metric to measure the feature space separation during the testing process. In conclusion, the conformity loss driven trained model has shown better feature space separation at comparable output performance.</p></div>","PeriodicalId":29926,"journal":{"name":"IFAC Journal of Systems and Control","volume":"28 ","pages":"Article 100260"},"PeriodicalIF":1.8000,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S246860182400021X/pdfft?md5=7ae999412c5f76db07310209ce438ec2&pid=1-s2.0-S246860182400021X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Feature space separation by conformity loss driven training of CNN\",\"authors\":\"N. Ding ,&nbsp;H. Arabian ,&nbsp;K. Möller\",\"doi\":\"10.1016/j.ifacsc.2024.100260\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Convolutional neural networks (CNNs) have enabled tremendous achievements in image classification, as the model can automatically extract image features and assign a proper classification. Nevertheless, the classification is lacking robustness to — for humans’ invisible perturbations on the input. To improve the robustness of the CNN model, it is necessary to understand the decision-making procedure of CNN models. By inspecting the learned feature space, we found that the classification regions are not always clearly separated by the CNN model. The overlap of classification regions increases the possibility to less perturbation induced input changes on classification results. Therefore, the clear separation of feature spaces of the CNN model should support decision robustness. In this paper, we propose to use a novel loss function called “conformity loss” to strengthen disjoint feature spaces during learning at different layers of the CNN, in order to improve the intra-class compactness and inter-class differences in trained representations. The same function was used as an evaluation metric to measure the feature space separation during the testing process. 
In conclusion, the conformity loss driven trained model has shown better feature space separation at comparable output performance.</p></div>\",\"PeriodicalId\":29926,\"journal\":{\"name\":\"IFAC Journal of Systems and Control\",\"volume\":\"28 \",\"pages\":\"Article 100260\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-04-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S246860182400021X/pdfft?md5=7ae999412c5f76db07310209ce438ec2&pid=1-s2.0-S246860182400021X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IFAC Journal of Systems and Control\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S246860182400021X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IFAC Journal of Systems and Control","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S246860182400021X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Convolutional neural networks (CNNs) have enabled tremendous achievements in image classification, as the model can automatically extract image features and assign a proper class label. Nevertheless, the classification lacks robustness to input perturbations that are invisible to humans. To improve the robustness of a CNN model, it is necessary to understand its decision-making procedure. By inspecting the learned feature space, we found that the classification regions are not always clearly separated by the CNN model. Overlapping classification regions increase the chance that even small perturbations of the input change the classification result. Therefore, a clear separation of the feature spaces of the CNN model should support decision robustness. In this paper, we propose a novel loss function called “conformity loss” that strengthens the disjointness of feature spaces during learning at different layers of the CNN, in order to improve intra-class compactness and inter-class differences in the trained representations. The same function was used as an evaluation metric to measure feature space separation during testing. In conclusion, the conformity loss driven trained model showed better feature space separation at comparable output performance.
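The abstract describes the conformity loss only at a high level: during training it should increase the intra-class compactness and inter-class differences of the features learned at different CNN layers, and at test time the same function serves as a separation metric. The exact formulation is not reproduced on this page, so the sketch below is only an assumed, center-loss-style surrogate in PyTorch; the function name `separation_loss`, its `margin` parameter, and the centroid-based terms are illustrative assumptions, not the paper's definition of conformity loss.

```python
# Minimal sketch of a compactness/separation loss (assumed surrogate, not the
# paper's conformity loss): pull each feature toward its class centroid and
# push centroids of different classes at least `margin` apart.
import torch
import torch.nn.functional as F


def separation_loss(features: torch.Tensor, labels: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """features: (batch, dim) activations from one CNN layer; labels: (batch,) class ids."""
    classes = labels.unique()                                   # sorted unique class ids in the batch
    centroids = torch.stack([features[labels == c].mean(dim=0) for c in classes])

    # Intra-class term: mean squared distance of each sample to its own class centroid.
    idx = torch.searchsorted(classes, labels)                   # map each label to its centroid row
    intra = ((features - centroids[idx]) ** 2).sum(dim=1).mean()

    # Inter-class term: hinge that pushes pairwise centroid distances beyond `margin`.
    dists = torch.cdist(centroids, centroids)                   # (C, C) pairwise centroid distances
    off_diag = ~torch.eye(len(classes), dtype=torch.bool, device=features.device)
    inter = F.relu(margin - dists[off_diag]).mean() if off_diag.any() else features.new_zeros(())

    return intra + inter
```

In a training loop, such a term would typically be added, with a weighting factor, to the usual cross-entropy loss for each monitored layer, e.g. `loss = F.cross_entropy(logits, y) + sum(0.1 * separation_loss(f, y) for f in layer_features)`; evaluating the same function on held-out features would then give the separation measure used at test time, as described in the abstract.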

Source journal: IFAC Journal of Systems and Control (AUTOMATION & CONTROL SYSTEMS). CiteScore: 3.70; self-citation rate: 5.30%; articles published: 17.