Lagrange Multipliers and Maximum Information Leakage in Different Observational Models

P. Malacaria, Han Chen
{"title":"不同观测模型的拉格朗日乘数与最大信息泄漏","authors":"P. Malacaria, Han Chen","doi":"10.1145/1375696.1375713","DOIUrl":null,"url":null,"abstract":"This paper explores two fundamental issues in Language based security. The first is to provide a quantitative definition of information leakage valid in several attacker's models. We consider attackers with different capabilities; the strongest one is able to observe the value of the low variables at each step during the execution of a program; the weakest one can only observe a single low value at some stage of the execution.\n We will provide a uniform definition of leakage, based on Information Theory, that will allow us to formalize and prove some intuitive relationships between the amount leaked by the same program in different models.\n The second issue is Channel Capacity, which in security terms amounts to answering the questions: given a program and an observational model, what is the maximum amount that the program can leak? And which input distribution causes the maximum leakage?\n To answer those questions we will introduce techniques from constrained non-linear optimization, mainly Lagrange multipliers and we will show how they provide a workable solution in all observational models considered. In the simplest setting, i.e. under minimal constraints, we will show that channel capacity is achieved by any input distribution which induces a uniform distribution on the observables.","PeriodicalId":119000,"journal":{"name":"ACM Workshop on Programming Languages and Analysis for Security","volume":"100 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"81","resultStr":"{\"title\":\"Lagrange multipliers and maximum information leakage in different observational models\",\"authors\":\"P. Malacaria, Han Chen\",\"doi\":\"10.1145/1375696.1375713\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper explores two fundamental issues in Language based security. The first is to provide a quantitative definition of information leakage valid in several attacker's models. We consider attackers with different capabilities; the strongest one is able to observe the value of the low variables at each step during the execution of a program; the weakest one can only observe a single low value at some stage of the execution.\\n We will provide a uniform definition of leakage, based on Information Theory, that will allow us to formalize and prove some intuitive relationships between the amount leaked by the same program in different models.\\n The second issue is Channel Capacity, which in security terms amounts to answering the questions: given a program and an observational model, what is the maximum amount that the program can leak? And which input distribution causes the maximum leakage?\\n To answer those questions we will introduce techniques from constrained non-linear optimization, mainly Lagrange multipliers and we will show how they provide a workable solution in all observational models considered. In the simplest setting, i.e. 
under minimal constraints, we will show that channel capacity is achieved by any input distribution which induces a uniform distribution on the observables.\",\"PeriodicalId\":119000,\"journal\":{\"name\":\"ACM Workshop on Programming Languages and Analysis for Security\",\"volume\":\"100 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"81\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Workshop on Programming Languages and Analysis for Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1375696.1375713\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Workshop on Programming Languages and Analysis for Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1375696.1375713","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 81

Abstract

This paper explores two fundamental issues in language-based security. The first is to provide a quantitative definition of information leakage valid in several attacker models. We consider attackers with different capabilities; the strongest one is able to observe the value of the low variables at each step during the execution of a program; the weakest one can only observe a single low value at some stage of the execution.

We will provide a uniform definition of leakage, based on Information Theory, that will allow us to formalize and prove some intuitive relationships between the amount leaked by the same program in different models.

The second issue is Channel Capacity, which in security terms amounts to answering the questions: given a program and an observational model, what is the maximum amount that the program can leak? And which input distribution causes the maximum leakage?

To answer those questions we will introduce techniques from constrained non-linear optimization, mainly Lagrange multipliers, and we will show how they provide a workable solution in all observational models considered. In the simplest setting, i.e. under minimal constraints, we will show that channel capacity is achieved by any input distribution which induces a uniform distribution on the observables.
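As a rough illustration of the Lagrange-multiplier argument the abstract points to (a sketch of the standard maximum-entropy derivation, not necessarily the paper's exact formulation): for a deterministic program the leakage reduces to the entropy H(O) of the induced distribution on the observables, so in the minimally constrained setting channel capacity is obtained by maximizing H(O) subject only to normalization.

```latex
% Sketch: maximize the entropy of the observables under the normalization constraint only.
\max_{p}\; H(O) \;=\; -\sum_{o \in O} p_o \log_2 p_o
\qquad \text{s.t.} \qquad \sum_{o \in O} p_o = 1

% Lagrangian and stationarity condition:
\Lambda(p,\lambda) \;=\; -\sum_{o} p_o \log_2 p_o \;+\; \lambda\Bigl(\sum_{o} p_o - 1\Bigr),
\qquad
\frac{\partial \Lambda}{\partial p_o} \;=\; -\log_2 p_o - \tfrac{1}{\ln 2} + \lambda \;=\; 0
```

The stationarity condition forces \(\log_2 p_o\) to be the same for every observable \(o\), i.e. \(p_o = 1/|O|\), giving capacity \(\log_2 |O|\) bits; any input distribution that induces this uniform distribution on the observables attains it, which is the result stated in the last sentence of the abstract.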
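Below is a minimal, hypothetical sketch (not code from the paper) of how the leakage of a deterministic program can be computed as the entropy of the observables, since for deterministic programs the conditional entropy of the output given the secret is zero, and of how a stronger observational model can only increase the leakage. The toy PIN-check program, the fixed guess, and the helper names `entropy` and `leakage` are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as a mapping value -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def leakage(secrets, observe):
    """Leakage of a deterministic program = H(O), the entropy of the distribution
    induced on the observables by the prior on secrets.
    `secrets` maps each secret value to its probability; `observe` maps a secret
    to the attacker-visible observation (a single value or a tuple of values)."""
    obs_dist = Counter()
    for s, p in secrets.items():
        obs_dist[observe(s)] += p
    return entropy(obs_dist)

# Toy program: the secret is a 2-bit PIN; the attacker's guess is fixed to 0b10.
GUESS = 0b10
secrets = {s: 1 / 4 for s in range(4)}   # uniform prior on the secret

# Weak attacker: observes only the final accept/reject bit.
final_obs = lambda s: s == GUESS

# Strong attacker: observes the outcome of each bit comparison as the check runs.
stepwise_obs = lambda s: ((s >> 1) == (GUESS >> 1), (s & 1) == (GUESS & 1))

print(f"final-value model leakage: {leakage(secrets, final_obs):.3f} bits")
print(f"per-step model leakage:    {leakage(secrets, stepwise_obs):.3f} bits")
```

On a uniform 2-bit secret this prints roughly 0.811 bits for the final-value model and 2.000 bits for the per-step model, matching the intuition from the abstract that an attacker who observes the low variables at every step learns at least as much as one who sees only a single low value.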