Title: Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent
Authors: Junyuan Hong, Zhangyang Wang, Jiayu Zhou
DOI: 10.1145/3531146.3533070
Venue: FAccT 2022: 5th ACM Conference on Fairness, Accountability, and Transparency, June 21-24, 2022, Seoul, South Korea (and online)
Pages: 11-35
Publication date: 2022-06-01 (e-pub 2022-06-20)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10115558/pdf/nihms-1888054.pdf
Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent.
Protecting privacy in learning while maintaining model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which perturbs gradients with noise calibrated to a differential privacy guarantee. Recent studies show that dynamic privacy schedules with decreasing noise magnitudes can improve the loss at the final iteration, yet theoretical understanding of the effectiveness of such schedules and their connections to optimization algorithms remains limited. In this paper, we provide a comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions. We first present a dynamic noise schedule that minimizes the utility upper bound of PGD, and show how the noise influence from each optimization step collectively impacts the utility of the final model. Our study also reveals how the impacts of dynamic noise influence change when momentum is used. We empirically show that the connection exists for general non-convex losses, and that the influence is greatly affected by the loss curvature.
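To make the setting concrete, the following is a minimal sketch of private gradient descent with a dynamic (decaying) noise schedule: clip each gradient to bound its sensitivity, then add Gaussian noise whose scale shrinks over iterations. The geometric schedule `sigma0 * decay**t` and all parameter names here are illustrative assumptions, not the schedule the paper derives (the paper obtains its schedule by minimizing a utility upper bound).

```python
import numpy as np

def private_gradient_descent(grad_fn, w0, T=100, lr=0.1, clip=1.0,
                             sigma0=2.0, decay=0.99, rng=None):
    """Sketch of PGD with a decaying noise schedule (illustrative only).

    grad_fn : callable returning the gradient at w.
    sigma0 * decay**t is a hypothetical geometric noise schedule; the
    paper instead derives the schedule minimizing a utility bound.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.asarray(w0, dtype=float)
    for t in range(T):
        g = grad_fn(w)
        # Clip the gradient norm to bound per-step sensitivity.
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        # Gaussian noise whose magnitude decreases with the iteration t.
        sigma_t = sigma0 * decay**t
        g = g + rng.normal(0.0, sigma_t * clip, size=g.shape)
        w = w - lr * g
    return w

# Toy quadratic loss 0.5 * ||w||^2, whose gradient is w itself.
w_final = private_gradient_descent(lambda w: w, np.array([5.0, -3.0]))
```

Setting `sigma0=0` recovers ordinary clipped gradient descent, which is a convenient sanity check that the optimization loop itself converges.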