Privacy-Preserving SGD on Shuffle Model

Lingjie Zhang, Hai Zhang
Journal article, published 2023-05-17
DOI: 10.1155/2023/4055950
In this paper, we study differentially private stochastic gradient descent (SGD) algorithms for stochastic convex optimization (SCO). Most of the existing literature either imposes additional assumptions on the losses, such as Lipschitz continuity, smoothness, strong convexity, or uniform boundedness of the model parameters, or focuses on the Euclidean (i.e., $\ell_2^d$) setting. However, these restrictive requirements exclude many popular losses, including the absolute loss and the hinge loss. By loosening these restrictions, we propose two differentially private SGD algorithms, one without and one with the shuffle model (DP-SGD-NOS and DP-SGD-S for short), for $(\alpha, L)$-Hölder smooth losses: both add calibrated Laplace noise, under the no-shuffling and the shuffling scheme respectively, in the $\ell_p^d$ setting for $p \in [1, 2]$.
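A loss is $(\alpha, L)$-Hölder smooth when its (sub)gradients satisfy $\|\nabla \ell(w) - \nabla \ell(w')\| \le L \|w - w'\|^{\alpha}$; $\alpha = 1$ recovers ordinary $L$-smoothness. The sketch below illustrates the general shape of per-iteration Laplace-noise SGD under such assumptions; the names (`dp_sgd_laplace`, `grad_fn`, `clip`, `noise_scale`) are ours, and the clipping and noise calibration are placeholders rather than the paper's exact DP-SGD-NOS procedure.

```python
# A minimal, hypothetical sketch of one-pass SGD with calibrated Laplace
# noise; parameter names and calibration are illustrative, not the paper's.
import numpy as np

def dp_sgd_laplace(grad_fn, samples, dim, lr=0.1, clip=1.0, noise_scale=1.0, seed=0):
    """One pass of SGD that l1-clips each per-sample gradient (bounding its
    sensitivity) and perturbs it with per-coordinate Laplace noise.

    noise_scale must be calibrated to the overall privacy budget by
    composing over all iterations (see the composition sketch below).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for x in samples:
        g = grad_fn(w, x)
        g = g * min(1.0, clip / max(np.abs(g).sum(), 1e-12))  # l1-clip to norm <= clip
        g = g + rng.laplace(scale=noise_scale, size=dim)      # calibrated Laplace noise
        w = w - lr * g
    return w

# Example with the absolute loss l(w; (x, y)) = |<w, x> - y|,
# whose subgradient in w is sign(<w, x> - y) * x:
X, y = np.random.randn(200, 5), np.random.randn(200)
grad = lambda w, i: np.sign(X[i] @ w - y[i]) * X[i]
w_priv = dp_sgd_laplace(grad, range(200), dim=5, noise_scale=2.0)
```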
We provide privacy guarantees using advanced composition and privacy amplification techniques. We also analyze the convergence of DP-SGD-NOS and DP-SGD-S and obtain, up to logarithmic factors, the optimal excess population risks $\mathcal{O}\big(1/\sqrt{n} + \sqrt{d \log(1/\delta)}/(n\epsilon)\big)$ and $\mathcal{O}\big(1/\sqrt{n} + \sqrt{d \log(1/\delta) \log(n/\delta)}/(n^{(4+\alpha)/(2(1+\alpha))}\epsilon)\big)$, respectively, with gradient complexity $\mathcal{O}(n^{\cdots})$ …
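The privacy guarantee composes the per-iteration Laplace mechanism across all updates; for DP-SGD-S, shuffling then amplifies the resulting budget. As an illustration of the composition step only, here is a sketch of the standard advanced composition bound (Dwork, Rothblum, and Vadhan); the paper's exact calibration and the additional amplification-by-shuffling factor are not reproduced here.

```python
# Total privacy cost of T adaptive runs of an (eps0, delta0)-DP step under
# the advanced composition theorem:
#   eps_total   = sqrt(2 * T * ln(1/delta_slack)) * eps0 + T * eps0 * (e**eps0 - 1)
#   delta_total = T * delta0 + delta_slack
import math

def advanced_composition(eps0, delta0, T, delta_slack):
    eps_total = (math.sqrt(2 * T * math.log(1 / delta_slack)) * eps0
                 + T * eps0 * (math.exp(eps0) - 1))
    return eps_total, T * delta0 + delta_slack

# e.g. n = 10_000 SGD steps, each a pure 0.005-DP Laplace mechanism:
print(advanced_composition(eps0=0.005, delta0=0.0, T=10_000, delta_slack=1e-5))
```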