Magical-Decomposition: Winning Both Adversarial Robustness and Efficiency on Hardware
Xin Cheng, Meiqi Wang, Yuanyuan Shi, Jun Lin, Zhongfeng Wang
2022 International Conference on Machine Learning and Cybernetics (ICMLC), published 2022-09-09. DOI: 10.1109/ICMLC56445.2022.9941335
Model compression is one of the most widely used techniques for efficiently deploying deep neural networks (DNNs) on resource-constrained Internet of Things (IoT) platforms. However, a naively compressed model is often vulnerable to adversarial attacks, creating a conflict between robustness and efficiency, especially for IoT devices exposed to complex real-world scenarios. We address this problem, for the first time, by developing a novel framework dubbed Magical-Decomposition that simultaneously enhances both robustness and hardware efficiency. Because it builds on a hardware-friendly model compression method, singular value decomposition, the defense can be supported by most existing DNN hardware accelerators. Going a step further, we use a recently developed DNN interpretation tool to clearly illustrate the underlying mechanism by which adversarial accuracy can be increased in the compressed model. Ablation studies and extensive experiments across various attacks, models, and datasets consistently validate the effectiveness and scalability of the proposed framework.
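The abstract names singular value decomposition as the hardware-friendly compression primitive. As a rough illustration only (this is not the authors' released code, and the paper's robustness-aware rank selection is not shown), the sketch below factorizes a dense layer's weight matrix with truncated SVD, replacing one large matrix multiplication with two smaller ones; the function name and the rank of 64 are illustrative assumptions.

```python
# Minimal sketch of SVD-based layer compression (assumed setup, not the paper's code).
import numpy as np

def svd_compress(W: np.ndarray, rank: int):
    """Return factors (A, B) such that A @ B approximates W at the given rank."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape (out_features, rank), singular values folded in
    B = Vt[:rank, :]             # shape (rank, in_features)
    return A, B

# Example: a 512x512 layer truncated to rank 64 stores 2*512*64 parameters
# instead of 512*512, roughly a 4x reduction in weights and multiply-accumulates.
W = np.random.randn(512, 512).astype(np.float32)
A, B = svd_compress(W, rank=64)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative reconstruction error
```

Because the factorized layer is still two ordinary dense (or 1x1 convolution) operations, it maps onto existing DNN accelerators without custom hardware support, which is the property the abstract emphasizes.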