Information Theoretic Private Inference in Quantized Models

Netanel Raviv, Rawad Bitar, Eitan Yaakobi
{"title":"量化模型中的信息论私有推理","authors":"Netanel Raviv, Rawad Bitar, Eitan Yaakobi","doi":"10.1109/ISIT50566.2022.9834464","DOIUrl":null,"url":null,"abstract":"In a Private Inference scenario, a server holds a model (e.g., a neural network), a user holds data, and the user wishes to apply the model on her data. The privacy of both parties must be protected; the user’s data might contain confidential information, and the server’s model is his intellectual property.Private inference has been studied extensively in recent years, mostly from a cryptographic perspective by incorporating homo-morphic encryption and multiparty computation protocols, which incur high computational overhead and degrade the accuracy of the model. In this work we take a perpendicular approach which draws inspiration from the expansive Private Information Retrieval literature. We view private inference as the task of retrieving an inner product of a parameter vector with the data, a fundamental step in most machine learning models.By combining binary arithmetic with real-valued one, we present a scheme which enables the retrieval of the inner product for models whose weights are either binarized, or given in fixed-point representation; such models gained increased attention recently, due to their ease of implementation and increased robustness. We also present a fundamental trade-off between the privacy of the user and that of the server, and show that our scheme is optimal in this sense. Our scheme is simple, universal to a large family of models, provides clear information-theoretic guarantees to both parties with zero accuracy loss, and in addition, is compatible with continuous data distributions and allows infinite precision.","PeriodicalId":348168,"journal":{"name":"2022 IEEE International Symposium on Information Theory (ISIT)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Information Theoretic Private Inference in Quantized Models\",\"authors\":\"Netanel Raviv, Rawad Bitar, Eitan Yaakobi\",\"doi\":\"10.1109/ISIT50566.2022.9834464\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In a Private Inference scenario, a server holds a model (e.g., a neural network), a user holds data, and the user wishes to apply the model on her data. The privacy of both parties must be protected; the user’s data might contain confidential information, and the server’s model is his intellectual property.Private inference has been studied extensively in recent years, mostly from a cryptographic perspective by incorporating homo-morphic encryption and multiparty computation protocols, which incur high computational overhead and degrade the accuracy of the model. In this work we take a perpendicular approach which draws inspiration from the expansive Private Information Retrieval literature. We view private inference as the task of retrieving an inner product of a parameter vector with the data, a fundamental step in most machine learning models.By combining binary arithmetic with real-valued one, we present a scheme which enables the retrieval of the inner product for models whose weights are either binarized, or given in fixed-point representation; such models gained increased attention recently, due to their ease of implementation and increased robustness. 
We also present a fundamental trade-off between the privacy of the user and that of the server, and show that our scheme is optimal in this sense. Our scheme is simple, universal to a large family of models, provides clear information-theoretic guarantees to both parties with zero accuracy loss, and in addition, is compatible with continuous data distributions and allows infinite precision.\",\"PeriodicalId\":348168,\"journal\":{\"name\":\"2022 IEEE International Symposium on Information Theory (ISIT)\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Symposium on Information Theory (ISIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISIT50566.2022.9834464\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Information Theory (ISIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT50566.2022.9834464","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

In a Private Inference scenario, a server holds a model (e.g., a neural network), a user holds data, and the user wishes to apply the model to her data. The privacy of both parties must be protected: the user's data might contain confidential information, and the server's model is his intellectual property. Private inference has been studied extensively in recent years, mostly from a cryptographic perspective that incorporates homomorphic encryption and multiparty computation protocols, which incur high computational overhead and degrade the accuracy of the model. In this work we take a perpendicular approach that draws inspiration from the expansive Private Information Retrieval literature. We view private inference as the task of retrieving the inner product of a parameter vector with the data, a fundamental step in most machine learning models. By combining binary arithmetic with real-valued arithmetic, we present a scheme that enables the retrieval of this inner product for models whose weights are either binarized or given in fixed-point representation; such models have gained increased attention recently due to their ease of implementation and increased robustness. We also present a fundamental trade-off between the privacy of the user and that of the server, and show that our scheme is optimal in this sense. Our scheme is simple, universal to a large family of models, provides clear information-theoretic guarantees to both parties with zero accuracy loss, and, in addition, is compatible with continuous data distributions and allows infinite precision.
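
To make the inner-product view concrete, the following is a minimal, hypothetical sketch (not the paper's scheme): it shows how an inner product between real-valued user data and fixed-point server weights decomposes into a handful of binary inner products combined with real-valued arithmetic. The vector length, bit width, and number of fractional bits below are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, bits, frac = 8, 6, 4            # vector length, weight bits, fractional bits (illustrative)

x = rng.standard_normal(n)                  # user's real-valued data vector
w_int = rng.integers(0, 2**bits, size=n)    # server's weights as unsigned fixed-point integers
w = w_int * 2.0 ** (-frac)                  # real values represented by the fixed-point weights

# Bit-plane decomposition: w = sum_j 2^(j - frac) * b_j with b_j in {0,1}^n,
# hence <w, x> = sum_j 2^(j - frac) * <b_j, x>. A single fixed-point inner
# product therefore reduces to `bits` binary inner products with the same data.
planes = [(w_int >> j) & 1 for j in range(bits)]
inner_from_planes = sum(2.0 ** (j - frac) * (planes[j] @ x) for j in range(bits))

assert np.isclose(w @ x, inner_from_planes)
print(w @ x, inner_from_planes)
```

Unsigned weights keep the decomposition simple; signed fixed-point weights would add a sign plane or a two's-complement correction term, and a binarized model corresponds to the special case of a single ±1-valued plane.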