VPP: Privacy Preserving Machine Learning via Undervolting

Md. Shohidul Islam, Behnam Omidi, Ihsen Alouani, Khaled N. Khasawneh
{"title":"VPP: Privacy Preserving Machine Learning via Undervolting","authors":"Md. Shohidul Islam, Behnam Omidi, Ihsen Alouani, Khaled N. Khasawneh","doi":"10.1109/HOST55118.2023.10133266","DOIUrl":null,"url":null,"abstract":"Machine Learning (ML) systems are susceptible to membership inference attacks (MIAs), which leak private information from the training data. Specifically, MIAs are able to infer whether a target sample has been used in the training data of a given model. Such privacy breaching concern motivated several defenses against MIAs. However, most of the state-of-theart defenses such as Differential Privacy (DP) come at the cost of lower utility (i.e, classification accuracy). In this work, we propose Privacy Preserving Volt $(V_{PP})$, a new lightweight inference-time approach that leverages undervolting for privacy-preserving ML. Unlike related work, VPP maintains protected models’ utility without requiring re-training. The key insight of our method is to blur the MIA differential analysis outcome by comprehensively garbling the model features using random noise. Unlike DP, which injects noise within the gradient at training time, VPP injects computational randomness in a set of layers’ during inference through carefully designed undervolting Specifically, we propose a bi-objective optimization approach to identify the noise characteristics that yield privacypreserving properties while maintaining the protected model’s utility. Extensive experimental results demonstrate that VPP yields a significantly more interesting utility/privacy tradeoff compared to prior defenses. For example, with comparable privacy protection on CIFAR-10 benchmark, VPP improves the utility by 32.93% over DP-SGD. Besides, while related noisebased defenses are defeated by label-only attacks, VPP shows high resilience to such adaptive MLA. More over, VPP comes with a by-product inference power gain of up to 61%. 
Finally, for a comprehensive analysis, we propose a new adaptive attacks that operate on the expectation over the stochastic model behavior. We believe that VPP represents a significant step towards practical privacy preserving techniques and considerably improves the state-of-the-art.","PeriodicalId":128125,"journal":{"name":"2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)","volume":"210 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HOST55118.2023.10133266","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Machine Learning (ML) systems are susceptible to membership inference attacks (MIAs), which leak private information from the training data. Specifically, MIAs are able to infer whether a target sample has been used in the training data of a given model. Such privacy-breaching concerns have motivated several defenses against MIAs. However, most state-of-the-art defenses, such as Differential Privacy (DP), come at the cost of lower utility (i.e., classification accuracy). In this work, we propose Privacy Preserving Volt $(V_{PP})$, a new lightweight inference-time approach that leverages undervolting for privacy-preserving ML. Unlike related work, VPP maintains the protected model's utility without requiring re-training. The key insight of our method is to blur the MIA differential analysis outcome by comprehensively garbling the model features using random noise. Unlike DP, which injects noise into the gradient at training time, VPP injects computational randomness into a set of layers during inference through carefully designed undervolting. Specifically, we propose a bi-objective optimization approach to identify the noise characteristics that yield privacy-preserving properties while maintaining the protected model's utility. Extensive experimental results demonstrate that VPP yields a significantly more favorable utility/privacy tradeoff compared to prior defenses. For example, with comparable privacy protection on the CIFAR-10 benchmark, VPP improves utility by 32.93% over DP-SGD. Moreover, while related noise-based defenses are defeated by label-only attacks, VPP shows high resilience to such adaptive MIAs. In addition, VPP comes with a by-product inference power gain of up to 61%. Finally, for a comprehensive analysis, we propose new adaptive attacks that operate on the expectation over the stochastic model behavior. We believe that VPP represents a significant step towards practical privacy-preserving techniques and considerably improves the state-of-the-art.
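The mechanism the abstract describes — injecting computational randomness into a subset of layers at inference time so that model features are garbled run to run — can be sketched in software. The following is a minimal illustrative emulation, not the paper's implementation: the paper induces this randomness in hardware via undervolting, whereas here we approximate it with additive Gaussian noise on a layer's pre-activations; the layer sizes and noise scale `sigma` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """A plain fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

def noisy_dense(x, w, b, sigma=0.05, rng=rng):
    """The same layer with random perturbations on the pre-activations,
    emulating undervolting-induced computational faults at inference time."""
    z = x @ w + b
    z += rng.normal(0.0, sigma, size=z.shape)  # stochastic garbling of features
    return np.maximum(0.0, z)

# Toy two-layer model; only the first layer is "undervolted".
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

x = rng.normal(size=(1, 8))
clean = dense(dense(x, w1, b1), w2, b2)
noisy = dense(noisy_dense(x, w1, b1), w2, b2)

# With a small sigma the predicted class typically agrees with the clean
# model (utility preserved), while the output vector varies across runs,
# blurring the signal an MIA's differential analysis relies on.
print(clean.argmax(axis=1), noisy.argmax(axis=1))
```

In this framing, the paper's bi-objective optimization corresponds to choosing which layers to perturb and how large `sigma` may grow: large enough to randomize the confidence scores an attacker observes, small enough that the argmax prediction, and hence accuracy, is rarely changed.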