A Fine-tuning-based Adversarial Network for Member Privacy Preserving

Xiangyi Lu, Qing Ren, Feng Tian
2021 International Conference on Networking and Network Applications (NaNA), October 2021
DOI: 10.1109/NaNA53684.2021.00082

Abstract

With the development of machine learning, the issue of privacy leakage has attracted much attention. The membership inference attack is an attack that threatens the privacy of training datasets: it uses the model's behavior to infer whether an input user record belongs to the training dataset, and then obtains the user's private information according to the purpose of the model. This paper studies the membership inference attack in the black-box setting. We design a defense mechanism in which the learning model and the inference attack model learn from each other, and the gain of the attack model is used to fine-tune the parameters of the learning model's last layer. The fine-tuned learning model reduces the gain of the membership inference attack with little loss of prediction accuracy. We evaluate the defense mechanism on deep neural networks using different datasets. The results show that when the converged training and test accuracies of the learning model are similar, the learning model loses only about 1% of its prediction accuracy, while the accuracy of the membership inference attack drops by a maximum of around 20%.
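The mechanism described above can be illustrated with a minimal, self-contained NumPy sketch. This is not the authors' implementation: the data, the linear "last layer", the confidence-threshold attack, and all hyperparameters below are assumptions chosen for illustration. The sketch trains a softmax last layer on member records, measures the attack's gain as the gap between member and non-member output confidence, and then fine-tunes the layer against that gain while guarding against task-loss degradation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(W, X, y):
    p = softmax(X @ W)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def ce_grad(W, X, y):
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0
    return X.T @ p / len(y)

# Hypothetical data: member records (training set) vs. fresh non-members.
d, k, n = 8, 3, 300
W_true = rng.normal(size=(d, k))
X_in = rng.normal(size=(n, d))
y_in = (X_in @ W_true + 0.5 * rng.normal(size=(n, k))).argmax(axis=1)
X_out = rng.normal(size=(n, d))

# Standard task training of the "last layer" W on the member records.
W = np.zeros((d, k))
for _ in range(300):
    W -= 0.5 * ce_grad(W, X_in, y_in)

def attack_gain(W):
    # Black-box confidence-threshold attack: members tend to receive
    # higher max-confidence outputs than non-members, so the mean gap
    # stands in for the attack model's gain.
    c_in = softmax(X_in @ W).max(axis=1)
    c_out = softmax(X_out @ W).max(axis=1)
    return float(c_in.mean() - c_out.mean())

def accuracy(W, X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

gain0, acc0 = attack_gain(W), accuracy(W, X_in, y_in)

def finetune(W, steps=30, lr=0.05, tol=0.02, eps=1e-4):
    # Defensive fine-tuning of the last layer only: take small steps
    # against the attack's gain (finite-difference gradient), keeping a
    # step only if the gain drops and the task loss stays within `tol`
    # of its starting value.
    W = W.copy()
    base_loss = ce_loss(W, X_in, y_in)
    for _ in range(steps):
        g = np.zeros_like(W)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                Wp = W.copy()
                Wp[i, j] += eps
                g[i, j] = (attack_gain(Wp) - attack_gain(W)) / eps
        cand = W - lr * g
        if (attack_gain(cand) < attack_gain(W)
                and ce_loss(cand, X_in, y_in) <= base_loss + tol):
            W = cand
    return W

W_ft = finetune(W)
gain1, acc1 = attack_gain(W_ft), accuracy(W_ft, X_in, y_in)
```

In the paper's mechanism the attack model is itself a learned network that is trained alternately with the target model; the threshold attack and finite-difference step here merely stand in for that adversarial signal. By construction, a fine-tuning step is accepted only when it lowers the attack's gain without materially raising the task loss, mirroring the paper's reported trade-off of a small accuracy loss for a larger drop in attack accuracy.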