Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform

Operating Systems Review (ACM), JCR Q3 (Computer Science) · Pub Date: 2019-07-25 · DOI: 10.1145/3352020.3352032
Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel J. Hsu
{"title":"Sage差异私有ML平台中的隐私会计与质量控制","authors":"Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel J. Hsu","doi":"10.1145/3352020.3352032","DOIUrl":null,"url":null,"abstract":"We present Sage, the first ML platform that enforces a global differential privacy (DP) guarantee across all models produced from a sensitive data stream. Sage extends the Tensorflow-Extended ML platform with novel mechanisms and DP theory to address operational challenges that arise from incorporating DP into ML training processes. First, to avoid the typical problem with DP systems of \"running out of privacy budget\" after a pre-established number of training processes, we develop block composition. It is a new DP composition theory that leverages the time-bounded structure of training processes to keep training models endlessly on a sensitive data stream while enforcing event-level DP on the stream. Second, to control the quality of ML models produced by Sage, we develop a novel iterative training process that trains a model on increasing amounts of data from a stream until, with high probability, the model meets developer-configured quality criteria.","PeriodicalId":38935,"journal":{"name":"Operating Systems Review (ACM)","volume":"53 1","pages":"75 - 84"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3352020.3352032","citationCount":"32","resultStr":"{\"title\":\"Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform\",\"authors\":\"Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel J. Hsu\",\"doi\":\"10.1145/3352020.3352032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present Sage, the first ML platform that enforces a global differential privacy (DP) guarantee across all models produced from a sensitive data stream. Sage extends the Tensorflow-Extended ML platform with novel mechanisms and DP theory to address operational challenges that arise from incorporating DP into ML training processes. First, to avoid the typical problem with DP systems of \\\"running out of privacy budget\\\" after a pre-established number of training processes, we develop block composition. It is a new DP composition theory that leverages the time-bounded structure of training processes to keep training models endlessly on a sensitive data stream while enforcing event-level DP on the stream. 
Second, to control the quality of ML models produced by Sage, we develop a novel iterative training process that trains a model on increasing amounts of data from a stream until, with high probability, the model meets developer-configured quality criteria.\",\"PeriodicalId\":38935,\"journal\":{\"name\":\"Operating Systems Review (ACM)\",\"volume\":\"53 1\",\"pages\":\"75 - 84\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1145/3352020.3352032\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Operating Systems Review (ACM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3352020.3352032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Operating Systems Review (ACM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3352020.3352032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 32

Abstract

We present Sage, the first ML platform that enforces a global differential privacy (DP) guarantee across all models produced from a sensitive data stream. Sage extends the Tensorflow-Extended ML platform with novel mechanisms and DP theory to address operational challenges that arise from incorporating DP into ML training processes. First, to avoid the typical problem with DP systems of "running out of privacy budget" after a pre-established number of training processes, we develop block composition. It is a new DP composition theory that leverages the time-bounded structure of training processes to keep training models endlessly on a sensitive data stream while enforcing event-level DP on the stream. Second, to control the quality of ML models produced by Sage, we develop a novel iterative training process that trains a model on increasing amounts of data from a stream until, with high probability, the model meets developer-configured quality criteria.
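The key idea behind block composition is that privacy loss is accounted per time-bounded block of the stream rather than against a single global budget, so newly arriving data always carries fresh budget and training never has to stop. The sketch below illustrates only that bookkeeping; the `DataBlock` and `BlockAccountant` names, and the simple sequential composition within each block, are assumptions made for illustration, not Sage's actual API or its exact composition theorem.

```python
from dataclasses import dataclass


@dataclass
class DataBlock:
    """A time-bounded slice of the sensitive stream with its own DP budget."""
    block_id: int
    epsilon_budget: float      # per-block share of the global guarantee, e.g. 1.0
    epsilon_spent: float = 0.0

    def remaining(self) -> float:
        return self.epsilon_budget - self.epsilon_spent


class BlockAccountant:
    """Tracks per-block budgets; basic sequential composition within each block."""

    def __init__(self, epsilon_per_block: float):
        self.epsilon_per_block = epsilon_per_block
        self.blocks: dict[int, DataBlock] = {}

    def new_block(self, block_id: int) -> DataBlock:
        """Register a newly arrived stream block, which carries fresh budget."""
        block = DataBlock(block_id, self.epsilon_per_block)
        self.blocks[block_id] = block
        return block

    def charge(self, block_ids: list[int], epsilon: float) -> bool:
        """Charge `epsilon` to every block a training run reads, atomically.

        Returns False if any touched block lacks budget; the run can then
        wait for fresher blocks instead of exhausting a global budget.
        """
        touched = [self.blocks[b] for b in block_ids]
        if any(b.remaining() < epsilon for b in touched):
            return False
        for b in touched:
            b.epsilon_spent += epsilon
        return True


# Example: three blocks exist; a run trains on blocks 1 and 2 at epsilon = 0.5.
accountant = BlockAccountant(epsilon_per_block=1.0)
for i in range(3):
    accountant.new_block(i)
assert accountant.charge([1, 2], epsilon=0.5)   # both blocks still have 0.5 left
```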
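The second mechanism, quality control via iterative training, can be read as a simple loop: train on a window of stream data, test a DP-noised quality metric against the developer's target with a noise-aware margin, and if the test fails, grow the window and retry. The sketch below assumes hypothetical `stream`, `train_fn`, and `evaluate_fn` interfaces and a fixed confidence margin; Sage's actual statistical test and its interaction with the privacy budget are more involved.

```python
def train_until_quality(stream, train_fn, evaluate_fn,
                        target_accuracy=0.90,
                        confidence_margin=0.02,
                        initial_size=1_000,
                        growth_factor=2,
                        max_size=1_000_000):
    """Retrain on growing windows of stream data until the DP-noised
    validation accuracy clears the developer-configured target."""
    size = initial_size
    while size <= max_size:
        data = stream.take(size)                  # next `size` events from the stream
        split = int(0.9 * len(data))
        train_data, val_data = data[:split], data[split:]
        model = train_fn(train_data)              # DP training step (e.g. DP-SGD) assumed
        noisy_acc = evaluate_fn(model, val_data)  # DP-noised accuracy estimate
        # Require the noisy estimate to beat the target by a margin, so the
        # true accuracy meets the target with high probability despite noise.
        if noisy_acc - confidence_margin >= target_accuracy:
            return model, size
        size *= growth_factor                     # otherwise, wait for / use more data
    raise RuntimeError("quality target not reached within the data cap")
```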
Source journal
Operating Systems Review (ACM) — Computer Science, Computer Networks and Communications
CiteScore: 2.80
Self-citation rate: 0.00%
Articles per year: 10
Journal description: Operating Systems Review (OSR) is a publication of the ACM Special Interest Group on Operating Systems (SIGOPS), whose scope of interest includes: computer operating systems and architecture for multiprogramming, multiprocessing, and time sharing; resource management; evaluation and simulation; reliability, integrity, and security of data; communications among computing processors; and computer system modeling and analysis.
Latest articles in this journal
Disaggregated GPU Acceleration for Serverless Applications
Navigating Performance-Efficiency Tradeoffs in Serverless Computing: Deduplication to the Rescue!
Using Local Cache Coherence for Disaggregated Memory Systems
Make It Real: An End-to-End Implementation of A Physically Disaggregated Data Center
Memory disaggregation: why now and what are the challenges