Title: Multi-User Privacy Mechanism Design with Non-zero Leakage
Authors: A. Zamani, T. Oechtering, M. Skoglund
Venue: 2023 IEEE Information Theory Workshop (ITW)
DOI: 10.1109/ITW55543.2023.10161670
Publication date: 2022-11-28
Citations: 2
Abstract
A privacy mechanism design problem is studied through the lens of information theory. In this work, an agent observes useful data Y = (Y1,…,YN) that is correlated with private data X = (X1,…,XN), which is assumed to be accessible by the agent as well. Here, we consider K users, where user i demands a sub-vector of Y, denoted by Ci. The agent wishes to disclose Ci to user i. A privacy mechanism is designed to generate disclosed data U that maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint in terms of mutual information. In a similar work it has been assumed that Xi is a deterministic function of Yi; in this work, however, we let Xi and Yi be arbitrarily correlated. First, an upper bound on the privacy-utility trade-off is obtained by using a specific transformation together with the Functional Representation Lemma and the Strong Functional Representation Lemma; we then show that the upper bound can be decomposed into N parallel problems. Next, lower bounds on the privacy-utility trade-off are derived using the Functional Representation Lemma and the Strong Functional Representation Lemma. The upper bound is tight within a constant, and the lower bounds assert that the disclosed data is independent of all $\left\{ {{X_j}} \right\}_{j = 1}^N$ except one, to which we allocate the maximum allowed leakage. Finally, the obtained bounds are studied in special cases.
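The trade-off the abstract describes — utility measured by mutual information with the useful data against leakage measured by mutual information with the private data — can be illustrated numerically. The sketch below is not the paper's mechanism; it is a minimal toy example with a single binary pair (X, Y), arbitrarily correlated (not a deterministic function, matching the paper's relaxed assumption), where the disclosed data U is produced by a hypothetical binary symmetric channel applied to Y. Increasing the channel noise lowers the utility I(U;Y) but also the leakage I(U;X).

```python
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in bits for a joint pmf given as a 2-D array."""
    pa = p_ab.sum(axis=1, keepdims=True)
    pb = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0  # skip zero-probability cells (0 * log 0 := 0)
    return float((p_ab[mask] * np.log2(p_ab[mask] / (pa * pb)[mask])).sum())

# Toy joint distribution P(X, Y): X and Y are correlated but Y does not
# determine X, illustrating the paper's relaxed assumption.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

def trade_off(eps):
    """Utility I(U;Y) and leakage I(U;X) when U = BSC(eps) applied to Y.

    Since X - Y - U forms a Markov chain, the data processing inequality
    guarantees I(U;X) <= I(U;Y) and I(U;X) <= I(X;Y) for every eps.
    """
    ch = np.array([[1 - eps, eps],
                   [eps, 1 - eps]])          # P(U | Y)
    p_xu = p_xy @ ch                         # P(X, U)
    p_yu = np.diag(p_xy.sum(axis=0)) @ ch    # P(Y, U)
    return mutual_information(p_yu), mutual_information(p_xu)

for eps in (0.0, 0.1, 0.3):
    utility, leakage = trade_off(eps)
    print(f"eps={eps}: utility I(U;Y)={utility:.3f}, leakage I(U;X)={leakage:.3f}")
```

At eps = 0 the disclosure is Y itself, so the utility equals H(Y) = 1 bit while the leakage equals I(X;Y); as eps grows, both quantities shrink, tracing a simple privacy-utility curve. The paper's mechanism optimizes this kind of trade-off over general disclosure channels for N correlated pairs and K users, rather than over this one-parameter family.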