Proving Differential Privacy in Hoare Logic

G. Barthe, Marco Gaboardi, E. J. G. Arias, Justin Hsu, César Kunz, Pierre-Yves Strub
{"title":"Hoare逻辑中差分隐私的证明","authors":"G. Barthe, Marco Gaboardi, E. J. G. Arias, Justin Hsu, César Kunz, Pierre-Yves Strub","doi":"10.1109/CSF.2014.36","DOIUrl":null,"url":null,"abstract":"Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2014-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"57","resultStr":"{\"title\":\"Proving Differential Privacy in Hoare Logic\",\"authors\":\"G. Barthe, Marco Gaboardi, E. J. G. Arias, Justin Hsu, César Kunz, Pierre-Yves Strub\",\"doi\":\"10.1109/CSF.2014.36\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. 
Finally, we compare our approach with existing verification techniques for privacy.\",\"PeriodicalId\":285965,\"journal\":{\"name\":\"2014 IEEE 27th Computer Security Foundations Symposium\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"57\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE 27th Computer Security Foundations Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSF.2014.36\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 27th Computer Security Foundations Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSF.2014.36","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 57

Abstract

Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.
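
For concreteness, the "bounded distance between output distributions on adjacent inputs" can be spelled out as the standard definition of ε-differential privacy from the literature (this formulation does not appear verbatim on this page): a randomized program P is ε-differentially private if, for all adjacent databases D, D' (differing in a single individual's record) and every set S of outputs,

\[
\Pr[P(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[P(D') \in S].
\]

The abstract only sketches the transformation, so the following is a minimal, hypothetical Python illustration of the underlying idea: simulate two executions of a probabilistic program on adjacent inputs as one deterministic program, and replace each random sampling by a budget-accounting step, so that an ordinary non-relational Hoare-style postcondition ("budget not exceeded") implies differential privacy. The function name, the coupling rule, and the budget discipline below are assumptions made for this sketch, not the paper's actual construction.

# Sketch only (not the paper's formal transformation): reduce the relational
# (2-safety) privacy claim for a toy "noisy sum" program to a plain
# assertion on a deterministic program.

def noisy_sum_paired(db1, db2, eps):
    """Simulate two runs of `sum(db) + Lap(1/eps)` on adjacent db1, db2.

    Instead of sampling, track the distance between the two values fed to
    the Laplace mechanism: coupling the two draws so both runs produce the
    same output costs eps * |v1 - v2| of the privacy budget when the noise
    scale is 1/eps (the standard Laplace-mechanism argument).
    """
    spent = 0.0                  # epsilon consumed so far
    v1, v2 = sum(db1), sum(db2)  # the two deterministic executions, in lockstep
    spent += eps * abs(v1 - v2)  # cost of coupling the two Laplace samples
    out = v1                     # shared output under the coupling
    return out, spent

# Adjacent databases: they differ in one record, and records lie in [0, 1],
# so the sum query has sensitivity 1, i.e. |sum(db1) - sum(db2)| <= 1.
db1 = [0.2, 0.9, 0.5]
db2 = [0.2, 0.9, 1.0]

eps = 0.5
out, spent = noisy_sum_paired(db1, db2, eps)
# Hoare-style postcondition on the transformed, non-probabilistic program:
# if the consumed budget never exceeds eps, the original probabilistic
# program is eps-differentially private.
assert spent <= eps, f"budget exceeded: {spent} > {eps}"
print(f"output {out}, epsilon spent {spent:.3f} (budget {eps})")

The point of such a reduction is that `spent <= eps` is a single-execution assertion on a deterministic program, even though the property it certifies relates two executions of the original probabilistic program.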