{"title":"针对定时攻击的差异隐私框架","authors":"Zachary Ratliff, Salil Vadhan","doi":"arxiv-2409.05623","DOIUrl":null,"url":null,"abstract":"The standard definition of differential privacy (DP) ensures that a\nmechanism's output distribution on adjacent datasets is indistinguishable.\nHowever, real-world implementations of DP can, and often do, reveal information\nthrough their runtime distributions, making them susceptible to timing attacks.\nIn this work, we establish a general framework for ensuring differential\nprivacy in the presence of timing side channels. We define a new notion of\ntiming privacy, which captures programs that remain differentially private to\nan adversary that observes the program's runtime in addition to the output. Our\nframework enables chaining together component programs that are timing-stable\nfollowed by a random delay to obtain DP programs that achieve timing privacy.\nImportantly, our definitions allow for measuring timing privacy and output\nprivacy using different privacy measures. We illustrate how to instantiate our\nframework by giving programs for standard DP computations in the RAM and Word\nRAM models of computation. 
Furthermore, we show how our framework can be\nrealized in code through a natural extension of the OpenDP Programming\nFramework.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"21 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Framework for Differential Privacy Against Timing Attacks\",\"authors\":\"Zachary Ratliff, Salil Vadhan\",\"doi\":\"arxiv-2409.05623\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The standard definition of differential privacy (DP) ensures that a\\nmechanism's output distribution on adjacent datasets is indistinguishable.\\nHowever, real-world implementations of DP can, and often do, reveal information\\nthrough their runtime distributions, making them susceptible to timing attacks.\\nIn this work, we establish a general framework for ensuring differential\\nprivacy in the presence of timing side channels. We define a new notion of\\ntiming privacy, which captures programs that remain differentially private to\\nan adversary that observes the program's runtime in addition to the output. Our\\nframework enables chaining together component programs that are timing-stable\\nfollowed by a random delay to obtain DP programs that achieve timing privacy.\\nImportantly, our definitions allow for measuring timing privacy and output\\nprivacy using different privacy measures. We illustrate how to instantiate our\\nframework by giving programs for standard DP computations in the RAM and Word\\nRAM models of computation. 
Furthermore, we show how our framework can be\\nrealized in code through a natural extension of the OpenDP Programming\\nFramework.\",\"PeriodicalId\":501332,\"journal\":{\"name\":\"arXiv - CS - Cryptography and Security\",\"volume\":\"21 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Cryptography and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.05623\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05623","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Framework for Differential Privacy Against Timing Attacks
The standard definition of differential privacy (DP) ensures that a mechanism's output distributions on adjacent datasets are indistinguishable. However, real-world implementations of DP can, and often do, reveal information through their runtime distributions, making them susceptible to timing attacks. In this work, we establish a general framework for ensuring differential privacy in the presence of timing side channels. We define a new notion of timing privacy, which captures programs that remain differentially private against an adversary who observes the program's runtime in addition to its output. Our framework enables chaining together timing-stable component programs, followed by a random delay, to obtain DP programs that achieve timing privacy. Importantly, our definitions allow timing privacy and output privacy to be measured with different privacy measures. We illustrate how to instantiate our framework by giving programs for standard DP computations in the RAM and Word RAM models of computation. Furthermore, we show how our framework can be realized in code through a natural extension of the OpenDP Programming Framework.
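The pattern the abstract describes — run a timing-stable DP component, then pad its runtime with a random delay — can be sketched in plain Python. This is a minimal illustration, not the paper's construction: all names (`dp_sum`, `with_random_delay`) are hypothetical, the truncated-Laplace delay distribution is a toy choice, and the paper's actual realization extends the OpenDP Programming Framework, whose API is not reproduced here.

```python
import random
import time

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_sum(data, epsilon: float) -> float:
    """Laplace mechanism for a bounded sum (assumes entries in [0, 1],
    so sensitivity 1). This protects the output, not the runtime."""
    return sum(data) + laplace(1.0 / epsilon)

def with_random_delay(program, data, epsilon: float, delay_scale: float):
    """Illustrative wrapper: run a (presumed timing-stable) DP program,
    then sleep for a random, nonnegative delay so the observed total
    runtime reveals little beyond what the output already does.
    The delay distribution here is a toy stand-in, not the paper's."""
    result = program(data, epsilon)
    # Truncate at zero: a delay cannot be negative.
    delay = max(0.0, delay_scale + laplace(delay_scale / 4.0))
    time.sleep(delay)
    return result

noisy_total = with_random_delay(dp_sum, [0.5, 0.25, 0.25], 1.0, 0.01)
```

Note that in the framework, the component being wrapped must itself be timing-stable (its runtime on adjacent datasets must be close as a random variable); a random delay alone does not rescue a program whose runtime is arbitrarily data-dependent.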