{"title":"二值化神经网络对抗性攻击与防御方法研究","authors":"Van-Ngoc Dinh, Ngoc H. Bui, Van-Tinh Nguyen, Khoa-Sang Nguyen, Quang-Manh Duong, Quang-Kien Trinh","doi":"10.1109/ATC55345.2022.9943040","DOIUrl":null,"url":null,"abstract":"Binarized Neural Networks (BNNs) are relatively hardware-efficient neural network models which are seriously considered for edge-AI applications. However, BNNs are like other neural networks and exhibit certain linear properties and are vulnerable to adversarial attacks. This work evaluates the robustness of BNNs under Projected Gradient Descent (PGD) - one of the most powerful iterative adversarial attacks, on BNN models and analyzes the effectiveness of corresponding defense methods. Our extensive simulation shows that the network almost malfunction when performing recognition tasks when tested with PGD samples without adversarial training. On the other hand, adversarial training could significantly improve robustness for both BNNs and Deep learning neural networks (DNNs), though strong PGD attacks could still be challenging. Therefore, adversarial attacks are a real threat, and more effective adversarial defense methods and innovative network architectures may be required for practical applications.","PeriodicalId":135827,"journal":{"name":"2022 International Conference on Advanced Technologies for Communications (ATC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Study on Adversarial Attacks and Defense Method on Binarized Neural Network\",\"authors\":\"Van-Ngoc Dinh, Ngoc H. Bui, Van-Tinh Nguyen, Khoa-Sang Nguyen, Quang-Manh Duong, Quang-Kien Trinh\",\"doi\":\"10.1109/ATC55345.2022.9943040\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Binarized Neural Networks (BNNs) are relatively hardware-efficient neural network models which are seriously considered for edge-AI applications. However, BNNs are like other neural networks and exhibit certain linear properties and are vulnerable to adversarial attacks. This work evaluates the robustness of BNNs under Projected Gradient Descent (PGD) - one of the most powerful iterative adversarial attacks, on BNN models and analyzes the effectiveness of corresponding defense methods. Our extensive simulation shows that the network almost malfunction when performing recognition tasks when tested with PGD samples without adversarial training. On the other hand, adversarial training could significantly improve robustness for both BNNs and Deep learning neural networks (DNNs), though strong PGD attacks could still be challenging. 
Therefore, adversarial attacks are a real threat, and more effective adversarial defense methods and innovative network architectures may be required for practical applications.\",\"PeriodicalId\":135827,\"journal\":{\"name\":\"2022 International Conference on Advanced Technologies for Communications (ATC)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Advanced Technologies for Communications (ATC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ATC55345.2022.9943040\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Advanced Technologies for Communications (ATC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ATC55345.2022.9943040","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Study on Adversarial Attacks and Defense Method on Binarized Neural Network
Binarized Neural Networks (BNNs) are relatively hardware-efficient neural network models that are seriously considered for edge-AI applications. However, like other neural networks, BNNs exhibit certain linear properties and are therefore vulnerable to adversarial attacks. This work evaluates the robustness of BNN models under Projected Gradient Descent (PGD), one of the most powerful iterative adversarial attacks, and analyzes the effectiveness of corresponding defense methods. Our extensive simulations show that, without adversarial training, the networks almost completely fail at recognition tasks when tested with PGD samples. On the other hand, adversarial training significantly improves the robustness of both BNNs and deep neural networks (DNNs), though strong PGD attacks remain challenging. Therefore, adversarial attacks are a real threat, and more effective adversarial defense methods and innovative network architectures may be required for practical applications.
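For readers unfamiliar with the attack and defense the abstract refers to, below is a minimal sketch of L-infinity PGD and a Madry-style adversarial-training step, written for a generic PyTorch classifier. This is illustrative only: the function names (pgd_attack, adversarial_training_step) and the hyperparameter values (epsilon=8/255, alpha=2/255, steps=10) are assumptions for the sketch, not the paper's actual models or settings, and inputs are assumed to be images scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: iterated signed-gradient ascent on the loss,
    projected back into an epsilon-ball around the clean input x.
    Hyperparameter values here are illustrative, not the paper's."""
    # Random start inside the epsilon-ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                # gradient step
            x_adv = torch.min(torch.max(x_adv, x - epsilon),
                              x + epsilon)                     # project to ball
            x_adv = x_adv.clamp(0, 1)                          # valid pixels
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training step: craft PGD examples for the current
    model, then update the model on those examples instead of clean data."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same inner/outer-loop structure applies whether the classifier is a full-precision DNN or a BNN; for BNNs, gradients are typically propagated through the binarization with a straight-through estimator, which is one reason gradient-based attacks like PGD remain effective against them.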