DDR-Defense: 3D Defense Network with a Detector, a Denoiser, and a Reformer

Yukun Zhao, Xinyun Zhang, Shuang Ren
2022 IEEE 8th International Conference on Computer and Communications (ICCC), published 2022-12-09. DOI: 10.1109/ICCC56324.2022.10065933

Abstract

Recently, 3D deep neural networks have been fully developed and applied to many high-safety tasks. However, due to the uninterpretability of deep learning networks, adversarial examples can easily prompt a normally trained deep learning model to make wrong predictions. In this paper, we propose a new point cloud defense network named DDR-Defense, a framework for defending neural network classifiers against adversarial examples. DDR-Defense modifies neither the number of points in the input samples nor the protected classifiers, so it can protect most classification models. DDR-Defense first distinguishes adversarial examples from normal examples through a reconstruction-based detector. The detector can prevent errors caused by processing the entire input samples, thereby improving the security of the defense network. For adversarial examples, we first use the statistical outlier removal (SOR) method for denoising, then use a reformer to rebuild them. We design a new reformer based on FoldingNet and a variational autoencoder, named Folding-VAE. We test DDR-Defense on the ModelNet40 dataset and find that it has a better defense effect than other existing 3D defense networks, especially against the saliency-map attack and the LG-GAN attack. The lightweight detector, denoiser, and reformer framework ensures the security and efficiency of 3D defense for most application scenarios. Our research will provide a basis for improving the robustness of deep learning models on 3D point clouds.
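The abstract describes a reconstruction-based detector: an input cloud is reconstructed by an autoencoder, and inputs with a large reconstruction error are flagged as adversarial. A minimal sketch of that decision rule, using the symmetric Chamfer distance as the error measure; the `reconstruct` callable and the `threshold` value are placeholders for the paper's autoencoder and its calibrated cutoff, not details taken from the paper.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    # Full pairwise distance matrix; fine for clouds of a few thousand points.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def is_adversarial(cloud, reconstruct, threshold):
    """Flag `cloud` when its reconstruction error exceeds `threshold`.

    `reconstruct` stands in for the detector's autoencoder; in this sketch
    any callable mapping an (N, 3) cloud to a reconstructed cloud works.
    """
    return chamfer_distance(cloud, reconstruct(cloud)) > threshold
```

Clean inputs that an autoencoder trained on normal data reconstructs well fall below the threshold and are passed through untouched; only flagged inputs enter the denoise-and-reform branch, which is what lets the framework avoid processing every input sample in full.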
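For flagged inputs, the first repair step named in the abstract is statistical outlier removal (SOR). A sketch under common assumptions (mean distance to the k nearest neighbors, a mean-plus-α-standard-deviations cutoff); the paper's actual k and α are not stated here, so the defaults below are illustrative only.

```python
import numpy as np

def sor_filter(points, k=10, alpha=1.0):
    """Statistical outlier removal on an (N, 3) point cloud.

    Compute each point's mean distance to its k nearest neighbors, then
    drop points whose mean kNN distance exceeds the global mean of that
    statistic by more than alpha standard deviations.
    """
    # Pairwise distances; O(N^2) memory, acceptable for typical clouds.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Column 0 after sorting is the zero self-distance, so skip it.
    knn = np.sort(d, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    thresh = mean_knn.mean() + alpha * mean_knn.std()
    return points[mean_knn <= thresh]
```

In the pipeline described by the abstract, the denoised cloud would then be passed to the Folding-VAE reformer for reconstruction before classification; that network is not reproduced in this sketch.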