Colour Space Defence: Simple, Intuitive, but Effective

Pei Yang, Jing Wang, Huandong Wang
2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), October 2022
DOI: 10.1109/ISSREW55968.2022.00086
Deep neural networks (DNNs) are widely applied in autonomous intelligent systems (AISs). However, DNNs are vulnerable to adversarial attacks from maliciously crafted input images, leading to performance degradation such as misclassifications. A misclassification made by an AIS could have severe, possibly lethal, consequences. While several existing works have proposed applying classic computer vision techniques to adversarial defence, these methods generally degrade the input information to a considerable extent. To restore model performance while minimising such degradation, we propose a novel adversarial defence method named Colour Space Defence. We first demonstrate the weak transferability of adversarial perturbations across different colour spaces. We then propose defending against adversarial examples by ensembling models trained in multiple colour spaces. Experiments verify that Colour Space Defence maintains performance on clean images, and in most defence settings the method outperformed several of its comparators.
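The paper does not include code, but the core idea — convert the input into several colour spaces, classify in each, and aggregate the predictions — can be sketched minimally. The snippet below is an illustrative sketch, not the authors' implementation: the stub classifiers, the colour-space registry, and the majority-vote aggregation are all hypothetical choices standing in for trained per-colour-space models. Only Python's standard `colorsys` module is used for the RGB→HSV conversion.

```python
import colorsys
from collections import Counter

def rgb_to_hsv_image(pixels):
    """Convert a flat list of (r, g, b) floats in [0, 1] to (h, s, v) tuples."""
    return [colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in pixels]

def ensemble_predict(image_rgb, models):
    """Classify one image with models trained in different colour spaces.

    models: dict mapping a colour-space name to a (convert_fn, classify_fn)
    pair. Each model sees the image in its own colour space; the final
    label is the majority vote across colour spaces (hypothetical
    aggregation rule -- the paper only specifies 'ensembling').
    """
    votes = [classify(convert(image_rgb)) for convert, classify in models.values()]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical usage with stub classifiers standing in for trained DNNs.
models = {
    "rgb": (lambda img: img, lambda img: "cat"),
    "hsv": (rgb_to_hsv_image, lambda img: "cat"),
    "ycc": (lambda img: img, lambda img: "dog"),  # stub: no real YCbCr transform
}
image = [(0.5, 0.2, 0.1), (0.9, 0.9, 0.9)]  # tiny 2-pixel "image"
print(ensemble_predict(image, models))  # majority of ["cat", "cat", "dog"] -> "cat"
```

The intuition the sketch captures is the paper's observation that adversarial perturbations transfer weakly between colour spaces: a perturbation tuned against the RGB model is less effective against the HSV model, so the vote is likelier to recover the clean label.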