SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap

S. V. Dibbo
{"title":"SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap","authors":"S. V. Dibbo","doi":"10.1109/CSF57540.2023.00027","DOIUrl":null,"url":null,"abstract":"A crucial module of the widely applied machine learning (ML) model is the model training phase, which involves large-scale training data, often including sensitive private data. ML models trained on these sensitive data suffer from significant privacy concerns since ML models can intentionally or unintendedly leak information about training data. Adversaries can exploit this information to perform privacy attacks, including model extraction, membership inference, and model inversion. While a model extraction attack steals and replicates a trained model functionality, and membership inference infers the data sample's inclusiveness to the training set, a model inversion attack has the goal of inferring the training data sample's sensitive attribute value or reconstructing the training sample (i.e., image/audio/text). Distinct and inconsistent characteristics of model inversion attack make this attack even more challenging and consequential, opening up model inversion attack as a more prominent and increasingly expanding research paradigm. Thereby, to flourish research in this relatively underexplored model inversion domain, we conduct the first-ever systematic literature review of the model inversion attack landscape. We characterize model inversion attacks and provide a comprehensive taxonomy based on different dimensions. We illustrate foundational perspectives emphasizing methodologies and key principles of the existing attacks and defense techniques. Finally, we discuss challenges and open issues in the existing model inversion attacks, focusing on the roadmap for future research directions.","PeriodicalId":179870,"journal":{"name":"2023 IEEE 36th Computer Security Foundations Symposium (CSF)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 36th Computer Security Foundations Symposium (CSF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSF57540.2023.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

A crucial stage of the widely applied machine learning (ML) pipeline is model training, which involves large-scale training data that often includes sensitive private data. ML models trained on such data raise significant privacy concerns, since a model can intentionally or unintentionally leak information about its training data. Adversaries can exploit this leakage to mount privacy attacks, including model extraction, membership inference, and model inversion. Whereas a model extraction attack steals and replicates the functionality of a trained model, and a membership inference attack determines whether a given sample was part of the training set, a model inversion attack aims to infer a sensitive attribute value of a training sample or to reconstruct the training sample itself (e.g., an image, audio clip, or text). The distinct and inconsistent characteristics of model inversion attacks make them especially challenging and consequential, establishing model inversion as a prominent and rapidly expanding research paradigm. To foster research in this relatively underexplored domain, we conduct the first systematic literature review of the model inversion attack landscape. We characterize model inversion attacks and provide a comprehensive taxonomy along several dimensions. We present foundational perspectives, emphasizing the methodologies and key principles of existing attack and defense techniques. Finally, we discuss challenges and open issues in existing model inversion attacks, focusing on a roadmap for future research directions.
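To make the distinction the abstract draws concrete, the following is a minimal illustrative sketch of the classic gradient-based, white-box model inversion idea (in the spirit of Fredrikson et al.'s attack on face recognition classifiers): instead of updating the model's weights, the adversary performs gradient descent on the *input* to maximize the model's confidence for a chosen class, recovering a representative of that class's training data. This is not the paper's method; the function name `invert_class` and all parameters are hypothetical, and a PyTorch image classifier is assumed.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 1, 64, 64),
                 steps=500, lr=0.1):
    """Sketch of gradient-based model inversion: starting from a blank
    input, ascend the model's confidence for `target_class` so the input
    comes to resemble a training sample of that class."""
    model.eval()
    # The optimization variable is the input image, not the model weights.
    x = torch.zeros(input_shape, requires_grad=True)
    optimizer = torch.optim.SGD([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Cross-entropy w.r.t. the input: low loss means high confidence
        # that x belongs to the target class.
        loss = F.cross_entropy(logits, target)
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return x.detach()
```

Note the contrast with membership inference: there the adversary already holds a candidate sample and only asks whether it was in the training set, whereas inversion synthesizes the sample itself from the model's responses.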