Auditing privacy budget of differentially private neural network models

Neurocomputing · IF 5.5 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 2 (Computer Science) · Pub Date: 2024-10-28 · DOI: 10.1016/j.neucom.2024.128756
Wen Huang, Zhishuo Zhang, Weixin Zhao, Jian Peng, Wenzheng Xu, Yongjian Liao, Shijie Zhou, Ziming Wang
{"title":"审核不同隐私神经网络模型的隐私预算","authors":"Wen Huang ,&nbsp;Zhishuo Zhang ,&nbsp;Weixin Zhao ,&nbsp;Jian Peng ,&nbsp;Wenzheng Xu ,&nbsp;Yongjian Liao ,&nbsp;Shijie Zhou ,&nbsp;Ziming Wang","doi":"10.1016/j.neucom.2024.128756","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, neural network models are used in various tasks. To eliminate privacy concern, differential privacy (DP) is introduced to the training phase of neural network models. However, introducing DP into neural network models is very subtle and error-prone, resulting in that some differentially private neural network models may not achieve privacy guarantee claimed. In this paper, we propose a method, which can audit privacy budget of differentially private neural network models. The proposed method is general and can be used to audit some other AI models. We elaborate on how to audit privacy budget of basic DP mechanisms and neural network models by the proposed method first. Then, we run experiments to verify our method. Experiment results indicate that the proposed method is better than the advanced method and the auditing precise is high when the privacy budget is small. In particular, when auditing privacy budget of ResNet-18 over CIFAR-10 protected by the differentially private mechanism with theoretical privacy budget 0.2, the accuracy of our method is about 17 times that of the state-of-the-art method. For the simpler dataset FMNIST, the accuracy of our method is about 32 times that of the state-of-the-art method when theoretical privacy budget is 0.2.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Auditing privacy budget of differentially private neural network models\",\"authors\":\"Wen Huang ,&nbsp;Zhishuo Zhang ,&nbsp;Weixin Zhao ,&nbsp;Jian Peng ,&nbsp;Wenzheng Xu ,&nbsp;Yongjian Liao ,&nbsp;Shijie Zhou ,&nbsp;Ziming Wang\",\"doi\":\"10.1016/j.neucom.2024.128756\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In recent years, neural network models are used in various tasks. To eliminate privacy concern, differential privacy (DP) is introduced to the training phase of neural network models. However, introducing DP into neural network models is very subtle and error-prone, resulting in that some differentially private neural network models may not achieve privacy guarantee claimed. In this paper, we propose a method, which can audit privacy budget of differentially private neural network models. The proposed method is general and can be used to audit some other AI models. We elaborate on how to audit privacy budget of basic DP mechanisms and neural network models by the proposed method first. Then, we run experiments to verify our method. Experiment results indicate that the proposed method is better than the advanced method and the auditing precise is high when the privacy budget is small. In particular, when auditing privacy budget of ResNet-18 over CIFAR-10 protected by the differentially private mechanism with theoretical privacy budget 0.2, the accuracy of our method is about 17 times that of the state-of-the-art method. 
For the simpler dataset FMNIST, the accuracy of our method is about 32 times that of the state-of-the-art method when theoretical privacy budget is 0.2.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224015273\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224015273","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, neural network models have been applied to a wide range of tasks. To mitigate the resulting privacy concerns, differential privacy (DP) has been introduced into the training phase of neural network models. However, introducing DP into neural network models is subtle and error-prone, so some differentially private neural network models may not achieve the privacy guarantee they claim. In this paper, we propose a method for auditing the privacy budget of differentially private neural network models. The proposed method is general and can also be used to audit other AI models. We first elaborate on how the proposed method audits the privacy budget of basic DP mechanisms and of neural network models, and we then run experiments to verify it. The experimental results indicate that the proposed method outperforms the state-of-the-art method and that its auditing precision is high when the privacy budget is small. In particular, when auditing ResNet-18 trained on CIFAR-10 and protected by a differentially private mechanism with a theoretical privacy budget of 0.2, the accuracy of our method is about 17 times that of the state-of-the-art method. For the simpler dataset FMNIST, the accuracy of our method is about 32 times that of the state-of-the-art method at the same theoretical privacy budget of 0.2.
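
The paper's own auditing algorithm is not reproduced on this page, but the general idea behind this line of work can be illustrated. Under the hypothesis-testing view of DP, for any output event E and any adjacent datasets D and D', an ε-DP mechanism M must satisfy Pr[M(D') ∈ E] ≤ e^ε · Pr[M(D) ∈ E]. An attacker who tries to distinguish D from D' therefore certifies the empirical lower bound ε ≥ ln(TPR/FPR) from its measured true-positive and false-positive rates. The minimal Python sketch below applies this bound to a basic DP mechanism (the Laplace mechanism on a count query); the mechanism, the threshold distinguisher, and the use of Clopper-Pearson confidence intervals are illustrative assumptions for the example, not the authors' construction:

    # Illustrative sketch of hypothesis-testing privacy auditing,
    # applied to the Laplace mechanism on a count query. NOT the
    # paper's algorithm: mechanism, distinguisher and intervals are
    # assumptions made for this example.
    import numpy as np
    from scipy.stats import beta

    def laplace_mechanism(true_count, epsilon, rng, size):
        # eps-DP release of a sensitivity-1 count query.
        return true_count + rng.laplace(scale=1.0 / epsilon, size=size)

    def clopper_pearson(k, n, alpha=0.05):
        # Exact two-sided (1 - alpha) confidence interval for a binomial rate.
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    def audit_epsilon(claimed_eps, n_trials=200_000, seed=0):
        rng = np.random.default_rng(seed)
        # Adjacent datasets: D has count 10, D' has count 11 (one extra record).
        count_d, count_d_prime = 10, 11
        threshold = 10.5  # distinguisher: output > threshold  =>  guess D'
        fp = int(np.sum(laplace_mechanism(count_d, claimed_eps, rng, n_trials) > threshold))
        tp = int(np.sum(laplace_mechanism(count_d_prime, claimed_eps, rng, n_trials) > threshold))
        # Conservative rates: lower-bound TPR, upper-bound FPR.
        tpr_lo, _ = clopper_pearson(tp, n_trials)
        _, fpr_hi = clopper_pearson(fp, n_trials)
        # Any eps-DP mechanism satisfies TPR <= e^eps * FPR for this event,
        # so the log-ratio is a high-confidence lower bound on the true budget.
        return max(0.0, float(np.log(tpr_lo / fpr_hi)))

    if __name__ == "__main__":
        for eps in (0.2, 1.0, 4.0):
            print(f"claimed eps = {eps}: audited lower bound ~= {audit_epsilon(eps):.3f}")

For small budgets such as ε = 0.2 this simple audit is already nearly tight, which matches the abstract's observation that auditing precision is highest at small privacy budgets. For neural networks, auditing work in this vein typically replaces the threshold test with a membership-style distinguisher (for example, inserting a crafted example and testing whether training on it is detectable), while the final estimate still comes from the same TPR/FPR ratio. Note the direction of the guarantee: a loose lower bound does not prove a mechanism private; an audit can only expose budgets that are violated.
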
Full text: https://www.sciencedirect.com/science/article/pii/S0925231224015273

Journal

Neurocomputing (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual publications: 1382
Review time: 70 days

About the journal: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.

Latest articles in this journal

An efficient re-parameterization feature pyramid network on YOLOv8 to the detection of steel surface defect
Editorial Board
Multi-contrast image clustering via multi-resolution augmentation and momentum-output queues
Augmented ELBO regularization for enhanced clustering in variational autoencoders
Learning from different perspectives for regret reduction in reinforcement learning: A free energy approach