Analyzing the Shuffle Model Through the Lens of Quantitative Information Flow

Mireya Jurado, Ramon G. Gonze, M. Alvim, C. Palamidessi
DOI: 10.1109/CSF57540.2023.00033
Published in: 2023 IEEE 36th Computer Security Foundations Symposium (CSF), 2023-05-22

Abstract

Local differential privacy (LDP) is a variant of differential privacy (DP) that avoids the necessity of a trusted central curator, at the expense of a worse trade-off between privacy and utility. The shuffle model has emerged as a way to provide greater anonymity to users by randomly permuting their messages, so that the direct link between users and their reported values is lost to the data collector. By combining an LDP mechanism with a shuffler, privacy can be improved at no cost for the accuracy of operations insensitive to permutations, thereby improving utility in many analytic tasks. However, the privacy implications of shuffling are not always immediately evident, and derivations of privacy bounds are made on a case-by-case basis. In this paper, we analyze the combination of LDP with shuffling in the rigorous framework of quantitative information flow (QIF), and reason about the resulting resilience to inference attacks. QIF naturally captures (combinations of) randomization mechanisms as information-theoretic channels, thus allowing for precise modeling of a variety of inference attacks in a natural way and for measuring the leakage of private information under these attacks. We exploit symmetries of k-RR mechanisms with the shuffle model to achieve closed formulas that express leakage exactly. We provide formulas that show how shuffling improves protection against leaks in the local model, and study how leakage behaves for various values of the privacy parameter of the LDP mechanism. In contrast to the strong adversary from differential privacy, who knows everyone's record in a dataset but the target's, we focus on an uninformed adversary, who does not know the value of any individual in the dataset. This adversary is often more realistic as a consumer of statistical datasets, and indeed we show that in some situations, mechanisms that are equivalent under the strong adversary can provide different privacy guarantees under the uninformed one. 
Finally, we also illustrate the application of our model to the typical strong adversary from DP.
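The pipeline the abstract describes — each user locally randomizes their value with a k-RR (k-ary randomized response) mechanism, and a shuffler then randomly permutes the reports before they reach the collector — can be sketched in a short simulation. This is an illustrative sketch based on the standard definition of k-RR, not code from the paper; the function names, domain, and parameter values are assumptions.

```python
import math
import random
from collections import Counter

def k_rr(value, domain, epsilon):
    """k-ary randomized response: report the true value with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random *other* value."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return value
    return random.choice([v for v in domain if v != value])

def shuffle_reports(reports):
    """The shuffler: a uniformly random permutation of the reports,
    severing the link between each user and their message."""
    shuffled = list(reports)
    random.shuffle(shuffled)
    return shuffled

# Example: 100 users, each holding a value from a k = 3 element domain.
domain = ["a", "b", "c"]
true_values = ["a"] * 60 + ["b"] * 30 + ["c"] * 10
reports = [k_rr(v, domain, epsilon=1.0) for v in true_values]
released = shuffle_reports(reports)

# Permutation-insensitive statistics, such as the histogram of reports,
# are unchanged by shuffling, so utility is preserved for such tasks.
print(Counter(released))
```

Note that `Counter(reports) == Counter(released)` always holds: shuffling changes only the order, which is exactly why it costs nothing for permutation-insensitive analyses while hiding who sent which report.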
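The QIF view of a mechanism as an information-theoretic channel can also be made concrete. The sketch below builds the k-RR channel matrix and computes multiplicative Bayes leakage for the uninformed (uniform-prior) adversary, using the standard QIF definitions of prior and posterior Bayes vulnerability; it illustrates the general machinery only, not the paper's closed-form leakage formulas for the shuffle model.

```python
import numpy as np

def krr_channel(k, epsilon):
    """Channel matrix C of k-RR, with C[x, y] = P(report y | true value x)."""
    e = np.exp(epsilon)
    C = np.full((k, k), 1.0 / (e + k - 1))   # probability of each wrong report
    np.fill_diagonal(C, e / (e + k - 1))      # probability of the true report
    return C

def posterior_bayes_vulnerability(prior, C):
    """V(pi, C) = sum_y max_x pi[x] * C[x, y]: the expected probability that
    a Bayes-optimal adversary guesses the secret in one try after observing
    the channel output."""
    joint = prior[:, None] * C
    return joint.max(axis=0).sum()

k, epsilon = 3, 1.0
prior = np.full(k, 1.0 / k)        # uninformed adversary: uniform prior
C = krr_channel(k, epsilon)

V_prior = prior.max()              # prior Bayes vulnerability: best blind guess
V_post = posterior_bayes_vulnerability(prior, C)
leakage = V_post / V_prior         # multiplicative Bayes leakage
```

For a uniform prior, the diagonal entry dominates each column, so `V_post` reduces to e^ε/(e^ε + k − 1) and the leakage is k times that; as ε → 0 the channel reveals nothing and the leakage tends to 1.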