GReDP: A More Robust Approach for Differential Privacy Training with Gradient-Preserving Noise Reduction
Haodi Wang, Tangyu Jiang, Yu Guo, Xiaohua Jia, Chengjun Cai
arXiv:2409.11663 (arXiv - CS - Cryptography and Security, 2024-09-18)
Abstract
Deep learning models have been extensively adopted across various domains due to their ability to represent hierarchical features, an ability that relies heavily on the training data and training procedures. Protecting the training process of deep learning algorithms is therefore paramount for privacy preservation. Although Differential Privacy (DP), as a powerful privacy-preserving primitive, has achieved promising results in deep learning training, existing schemes still fall short in preserving model utility: they either require a high noise scale or inevitably distort the original gradients. To address these issues, we present a more robust approach for DP training called GReDP. Specifically, we compute the model gradients in the frequency domain and adopt a new approach to reduce the noise level. Unlike previous work, our GReDP requires only half the noise scale of DPSGD [1] while keeping all the gradient information intact. We present a detailed theoretical and empirical analysis of our method. The experimental results show that GReDP consistently outperforms the baselines across all models and training settings.
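
To make the high-level idea concrete, below is a minimal PyTorch sketch of frequency-domain gradient perturbation: clip a gradient as in DPSGD, add Gaussian noise to its Fourier coefficients, and transform back. The function name, clipping bound, and noise scale are illustrative assumptions; this is not the paper's calibrated noise-reduction mechanism, only the general pattern the abstract describes.

```python
import torch

def noisy_gradient_fft(grad: torch.Tensor, clip_norm: float = 1.0,
                       sigma: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch of frequency-domain gradient noising.
    Parameters (clip_norm, sigma) are placeholders, not the values
    or calibration from the GReDP paper."""
    # Clip the gradient to bound per-sample sensitivity, as in DPSGD.
    norm = grad.norm()
    clipped = grad * min(1.0, clip_norm / (norm.item() + 1e-12))

    # Move the gradient into the frequency domain.
    spectrum = torch.fft.fft(clipped.flatten())

    # Perturb the real and imaginary parts with Gaussian noise.
    noise = torch.complex(torch.randn_like(spectrum.real),
                          torch.randn_like(spectrum.real))
    spectrum = spectrum + sigma * clip_norm * noise

    # Transform back; the residual imaginary part is discarded.
    return torch.fft.ifft(spectrum).real.reshape(grad.shape)
```

In this sketch the noise is added after the FFT rather than directly to the gradient entries; the paper's claimed advantage, achieving the same privacy with roughly half the noise scale of DPSGD while preserving all gradient information, would come from how that frequency-domain noise is reduced, which is not reproduced here.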