Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel J. Hsu
{"title":"Sage差异私有ML平台中的隐私会计与质量控制","authors":"Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel J. Hsu","doi":"10.1145/3352020.3352032","DOIUrl":null,"url":null,"abstract":"We present Sage, the first ML platform that enforces a global differential privacy (DP) guarantee across all models produced from a sensitive data stream. Sage extends the Tensorflow-Extended ML platform with novel mechanisms and DP theory to address operational challenges that arise from incorporating DP into ML training processes. First, to avoid the typical problem with DP systems of \"running out of privacy budget\" after a pre-established number of training processes, we develop block composition. It is a new DP composition theory that leverages the time-bounded structure of training processes to keep training models endlessly on a sensitive data stream while enforcing event-level DP on the stream. Second, to control the quality of ML models produced by Sage, we develop a novel iterative training process that trains a model on increasing amounts of data from a stream until, with high probability, the model meets developer-configured quality criteria.","PeriodicalId":38935,"journal":{"name":"Operating Systems Review (ACM)","volume":"53 1","pages":"75 - 84"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3352020.3352032","citationCount":"32","resultStr":"{\"title\":\"Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform\",\"authors\":\"Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel J. Hsu\",\"doi\":\"10.1145/3352020.3352032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present Sage, the first ML platform that enforces a global differential privacy (DP) guarantee across all models produced from a sensitive data stream. 
Sage extends the Tensorflow-Extended ML platform with novel mechanisms and DP theory to address operational challenges that arise from incorporating DP into ML training processes. First, to avoid the typical problem with DP systems of \\\"running out of privacy budget\\\" after a pre-established number of training processes, we develop block composition. It is a new DP composition theory that leverages the time-bounded structure of training processes to keep training models endlessly on a sensitive data stream while enforcing event-level DP on the stream. Second, to control the quality of ML models produced by Sage, we develop a novel iterative training process that trains a model on increasing amounts of data from a stream until, with high probability, the model meets developer-configured quality criteria.\",\"PeriodicalId\":38935,\"journal\":{\"name\":\"Operating Systems Review (ACM)\",\"volume\":\"53 1\",\"pages\":\"75 - 84\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1145/3352020.3352032\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Operating Systems Review (ACM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3352020.3352032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Operating Systems Review (ACM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3352020.3352032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform
We present Sage, the first ML platform that enforces a global differential privacy (DP) guarantee across all models produced from a sensitive data stream. Sage extends the TensorFlow Extended (TFX) ML platform with novel mechanisms and DP theory to address operational challenges that arise from incorporating DP into ML training processes. First, to avoid the typical problem with DP systems of "running out of privacy budget" after a pre-established number of training processes, we develop block composition: a new DP composition theory that leverages the time-bounded structure of training processes to keep training models endlessly on a sensitive data stream while enforcing event-level DP on the stream. Second, to control the quality of ML models produced by Sage, we develop a novel iterative training process that trains a model on increasing amounts of data from a stream until, with high probability, the model meets developer-configured quality criteria.
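The two mechanisms described above can be illustrated with a minimal sketch. Here, each time-bounded block of the stream carries its own privacy budget, so newly arriving blocks replenish capacity instead of exhausting a single global budget, and a retry loop trains on growing prefixes of the stream until a quality test passes. All names (`BlockAccountant`, `train_until_quality`) and parameters are hypothetical and use plain sequential composition; they are not Sage's actual API or its composition theorems.

```python
class BlockAccountant:
    """Hypothetical per-block privacy accountant: each data block has
    its own epsilon budget; costs on the same block add up (basic
    sequential composition)."""

    def __init__(self, per_block_budget):
        self.per_block_budget = per_block_budget
        self.spent = {}  # block_id -> epsilon consumed so far

    def charge(self, block_ids, epsilon):
        # Reject the whole run if any touched block would exceed its budget.
        for b in block_ids:
            if self.spent.get(b, 0.0) + epsilon > self.per_block_budget:
                raise RuntimeError(f"block {b} out of privacy budget")
        for b in block_ids:
            self.spent[b] = self.spent.get(b, 0.0) + epsilon


def train_until_quality(accountant, blocks, train, dp_quality_check,
                        epsilon_per_run, target):
    """Retry DP training on growing prefixes of the block stream until a
    (DP-estimated) quality score reaches `target`; returns None if the
    stream is exhausted first."""
    for k in range(1, len(blocks) + 1):
        window = blocks[:k]
        accountant.charge(window, epsilon_per_run)  # pay before training
        model = train(window)
        if dp_quality_check(model, window) >= target:
            return model
    return None
```

The key point of the sketch is that the budget check is per block: a training run that touches only fresh blocks can always proceed, which is what lets the platform keep training indefinitely on the stream.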
About the journal:
Operating Systems Review (OSR) is a publication of the ACM Special Interest Group on Operating Systems (SIGOPS), whose scope of interest includes: computer operating systems and architecture for multiprogramming, multiprocessing, and time sharing; resource management; evaluation and simulation; reliability, integrity, and security of data; communications among computing processors; and computer system modeling and analysis.