Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks

ACM Transactions on Privacy and Security · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3592800 · IF 3.0 · JCR Q2 (Computer Science, Information Systems) · CAS Tier 4 (Computer Science)
Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
{"title":"超越梯度:利用模型反转攻击中的对抗性先验","authors":"Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis","doi":"https://dl.acm.org/doi/10.1145/3592800","DOIUrl":null,"url":null,"abstract":"<p>Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed <i>model inversion attacks</i>, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically <i>only</i> rely on the shared data representations, ignoring the adversarial priors, or require that specific layers are present in the target model, reducing the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks, but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness against the collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":"25 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks\",\"authors\":\"Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis\",\"doi\":\"https://dl.acm.org/doi/10.1145/3592800\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed <i>model inversion attacks</i>, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically <i>only</i> rely on the shared data representations, ignoring the adversarial priors, or require that specific layers are present in the target model, reducing the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks, but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness against the collaborative medical imaging tasks. 
Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.</p>\",\"PeriodicalId\":56050,\"journal\":{\"name\":\"ACM Transactions on Privacy and Security\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Privacy and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/https://dl.acm.org/doi/10.1145/3592800\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Privacy and Security","FirstCategoryId":"94","ListUrlMain":"https://doi.org/https://dl.acm.org/doi/10.1145/3592800","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically rely only on the shared data representations, ignoring adversarial priors, or require that specific layers be present in the target model, which reduces the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness on collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.
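For orientation, the gradient-matching objective that this family of attacks builds on can be sketched as below. This is a minimal, illustrative PyTorch reconstruction in the spirit of prior gradient-based inversion work (e.g. "Deep Leakage from Gradients"), not the authors' implementation; the optimiser settings, shapes, and the prior_fn hook (the place where feature- or style-based adversarial priors such as those proposed in this paper would plug in) are assumptions made for illustration only.

# Illustrative sketch of gradient-matching model inversion; all names and
# hyperparameters below are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

def invert_from_gradients(model, observed_grads, input_shape, num_classes,
                          steps=300, lr=0.1, prior_fn=None, prior_weight=1e-2):
    """Optimise a dummy (input, label) pair so that its gradients match the
    update observed from a victim participant; an optional prior term stands
    in for adversary-side knowledge such as feature or style statistics."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label
    optimiser = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)
    task_loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        def closure():
            optimiser.zero_grad()
            pred = model(dummy_x)
            task_loss = task_loss_fn(pred, dummy_y.softmax(dim=-1))
            grads = torch.autograd.grad(task_loss, model.parameters(),
                                        create_graph=True)
            # Core objective: squared L2 distance to the observed gradients.
            loss = sum(((g - og) ** 2).sum()
                       for g, og in zip(grads, observed_grads))
            if prior_fn is not None:
                # Hypothetical hook: an adversarial prior (e.g. a feature- or
                # style-matching penalty) regularises the reconstruction.
                loss = loss + prior_weight * prior_fn(dummy_x)
            loss.backward()
            return loss
        optimiser.step(closure)
    return dummy_x.detach(), dummy_y.detach()

In a collaborative setting, observed_grads would be the parameter update shared by a client during training. Per the abstract, the paper's contribution is precisely to strengthen the prior term that plain gradient matching leaves out, rather than relying on the matching objective alone.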

Source Journal

ACM Transactions on Privacy and Security (Computer Science: General Computer Science)
CiteScore: 5.20 · Self-citation rate: 0.00% · Annual articles: 52
Journal Description: ACM Transactions on Privacy and Security (TOPS) (formerly known as TISSEC) publishes high-quality research results in the fields of information and system security and privacy. Studies addressing all aspects of these fields are welcomed, ranging from technologies, to systems and applications, to the crafting of policies.
Latest Articles in This Journal

ZPredict: ML-Based IPID Side-channel Measurements
ZTA-IoT: A Novel Architecture for Zero-Trust in IoT Systems and an Ensuing Usage Control Model
Security Analysis of the Consumer Remote SIM Provisioning Protocol
X-squatter: AI Multilingual Generation of Cross-Language Sound-squatting
Toward Robust ASR System against Audio Adversarial Examples using Agitated Logit