Measures of Information Leakage for Incomplete Statistical Information: Application to a Binary Privacy Mechanism

ACM Transactions on Privacy and Security · Pub Date: 2023-11-13 · DOI: 10.1145/3624982 · Impact Factor: 3.0 · JCR Q2 (Computer Science, Information Systems) · CAS Tier 4 (Computer Science)
Shahnewaz Karim Sakib, George T Amariucai, Yong Guan
{"title":"Measures of Information Leakage for Incomplete Statistical Information: Application to a Binary Privacy Mechanism","authors":"Shahnewaz Karim Sakib, George T Amariucai, Yong Guan","doi":"10.1145/3624982","DOIUrl":null,"url":null,"abstract":"Information leakage is usually defined as the logarithmic increment in the adversary’s probability of correctly guessing the legitimate user’s private data or some arbitrary function of the private data when presented with the legitimate user’s publicly disclosed information. However, this definition of information leakage implicitly assumes that both the privacy mechanism and the prior probability of the original data are entirely known to the attacker. In reality, the assumption of complete knowledge of the privacy mechanism for an attacker is often impractical. The attacker can usually have access to only an approximate version of the correct privacy mechanism, computed from a limited set of the disclosed data, for which they can access the corresponding un-distorted data. In this scenario, the conventional definition of leakage no longer has an operational meaning. To address this problem, in this article, we propose novel meaningful information-theoretic metrics for information leakage when the attacker has incomplete information about the privacy mechanism—we call them average subjective leakage , average confidence boost , and average objective leakage , respectively. For the simplest, binary scenario, we demonstrate how to find an optimized privacy mechanism that minimizes the worst-case value of either of these leakages.","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":"3 11","pages":"0"},"PeriodicalIF":3.0000,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Privacy and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3624982","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Information leakage is usually defined as the logarithmic increment in the adversary’s probability of correctly guessing the legitimate user’s private data, or some arbitrary function of the private data, when presented with the legitimate user’s publicly disclosed information. However, this definition of information leakage implicitly assumes that both the privacy mechanism and the prior probability of the original data are entirely known to the attacker. In reality, the assumption that an attacker has complete knowledge of the privacy mechanism is often impractical. The attacker usually has access to only an approximate version of the correct privacy mechanism, computed from a limited set of the disclosed data for which they can also access the corresponding undistorted data. In this scenario, the conventional definition of leakage no longer has an operational meaning. To address this problem, in this article we propose novel, meaningful information-theoretic metrics for information leakage when the attacker has incomplete information about the privacy mechanism; we call them average subjective leakage, average confidence boost, and average objective leakage, respectively. For the simplest, binary scenario, we demonstrate how to find an optimized privacy mechanism that minimizes the worst-case value of either of these leakages.
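As context for the conventional notion of leakage that the abstract refers to (not the paper's new metrics), the "logarithmic increment in the adversary's probability of correctly guessing" is typically computed as the log-ratio of the adversary's posterior and prior success probabilities. Below is a minimal, hypothetical Python sketch for a binary private variable X and a binary privacy mechanism P(Y|X); the prior, the mechanism matrix, and all variable names are illustrative assumptions, not values or notation taken from the paper.

```python
import math

# Illustrative assumption: binary private data X in {0, 1} with prior p_x,
# and a binary privacy mechanism given as a 2x2 row-stochastic matrix
# mech[x][y] = P(Y = y | X = x). These numbers are made up for the sketch.
p_x = [0.7, 0.3]
mech = [[0.8, 0.2],   # P(Y | X = 0)
        [0.1, 0.9]]   # P(Y | X = 1)

# Adversary's prior probability of guessing X correctly (best blind guess).
p_guess_prior = max(p_x)

# Adversary's posterior probability of guessing correctly after observing Y:
# sum over y of max_x P(X = x) * P(Y = y | X = x).
p_guess_posterior = sum(
    max(p_x[x] * mech[x][y] for x in range(2)) for y in range(2)
)

# Conventional leakage: the logarithmic increment in the adversary's
# probability of a correct guess, as described in the abstract.
leakage_bits = math.log2(p_guess_posterior / p_guess_prior)

print(f"prior guess prob:     {p_guess_prior:.3f}")
print(f"posterior guess prob: {p_guess_posterior:.3f}")
print(f"leakage (bits):       {leakage_bits:.3f}")
```

Note that this computation presupposes the attacker knows `mech` exactly. The abstract's point is that when the attacker only holds an estimate of the mechanism built from a limited sample of disclosed/undistorted data pairs, the quantity above loses its operational meaning, which is what motivates the paper's average subjective leakage, average confidence boost, and average objective leakage.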
Source journal

ACM Transactions on Privacy and Security (Computer Science: General Computer Science)
CiteScore: 5.20 · Self-citation rate: 0.00% · Articles per year: 52

Journal description: ACM Transactions on Privacy and Security (TOPS) (formerly known as TISSEC) publishes high-quality research results in the fields of information and system security and privacy. Studies addressing all aspects of these fields are welcomed, ranging from technologies, to systems and applications, to the crafting of policies.
Latest articles in this journal

ZPredict: ML-Based IPID Side-channel Measurements
ZTA-IoT: A Novel Architecture for Zero-Trust in IoT Systems and an Ensuing Usage Control Model
Security Analysis of the Consumer Remote SIM Provisioning Protocol
X-squatter: AI Multilingual Generation of Cross-Language Sound-squatting
Toward Robust ASR System against Audio Adversarial Examples using Agitated Logit