{"title":"Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage","authors":"F. Farokhi","doi":"10.1109/CISS50987.2021.9400316","DOIUrl":null,"url":null,"abstract":"Membership inference attacks, i.e., adversarial attacks inferring whether a data record is used for training a machine learning model, has been recently shown to pose a legitimate privacy risk in machine learning literature. In this paper, we propose two measures of information leakage for investigating membership inference attacks backed by results on binary hypothesis testing in information theory literature. The first measure of information leakage is defined using Rényi α-divergence of the distribution of output of a machine learning model for data records that are in and out of the training dataset. The second measure of information leakage is based on Arimoto-Rényi α-information between the membership random variable (whether the data record is in or out of the training dataset) and the output of the machine learning model. These measures of leakage are shown to be related to each other. We compare the proposed measures of information leakage with α-leakage from the information-theoretic privacy literature to establish some useful properties. We establish an upper bound for α-divergence information leakage as a function of the privacy budget for differentially-private machine learning models.","PeriodicalId":228112,"journal":{"name":"2021 55th Annual Conference on Information Sciences and Systems (CISS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 55th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS50987.2021.9400316","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Membership inference attacks, i.e., adversarial attacks that infer whether a data record was used to train a machine learning model, have recently been shown to pose a legitimate privacy risk in the machine learning literature. In this paper, we propose two measures of information leakage for investigating membership inference attacks, backed by results on binary hypothesis testing from the information theory literature. The first measure of information leakage is defined using the Rényi α-divergence between the distributions of the output of a machine learning model for data records that are in and out of the training dataset. The second measure of information leakage is based on the Arimoto-Rényi α-information between the membership random variable (indicating whether the data record is in or out of the training dataset) and the output of the machine learning model. These measures of leakage are shown to be related to each other. We compare the proposed measures of information leakage with α-leakage from the information-theoretic privacy literature to establish some useful properties. Finally, we establish an upper bound on the α-divergence information leakage as a function of the privacy budget for differentially-private machine learning models.
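
For readers who want the quantities spelled out, the two measures admit the following standard forms from the information theory literature (for α > 0, α ≠ 1); this is a sketch consistent with the abstract, and the paper's exact notation may differ. Writing P_in and P_out for the model-output distributions on records inside and outside the training dataset, M for the membership random variable, and Y for the model output:

\[
  D_\alpha(P_{\mathrm{in}} \,\|\, P_{\mathrm{out}})
    = \frac{1}{\alpha - 1} \log \sum_{y} P_{\mathrm{in}}(y)^{\alpha}\, P_{\mathrm{out}}(y)^{1-\alpha},
  \qquad
  I_\alpha^{A}(M; Y) = H_\alpha(M) - H_\alpha^{A}(M \mid Y),
\]
where \( H_\alpha(M) = \frac{1}{1-\alpha} \log \sum_{m} P_M(m)^{\alpha} \) is the Rényi entropy and
\[
  H_\alpha^{A}(M \mid Y)
    = \frac{\alpha}{1-\alpha} \log \sum_{y} \Big( \sum_{m} P_{M,Y}(m, y)^{\alpha} \Big)^{1/\alpha}
\]
is Arimoto's conditional Rényi entropy. On the differential-privacy connection: an ε-differentially-private model satisfies P_in(y) ≤ e^ε P_out(y) for all y (the in/out datasets differ in a single record), so by monotonicity of the Rényi divergence in α, D_α(P_in ‖ P_out) ≤ D_∞(P_in ‖ P_out) ≤ ε for every α. The bound established in the paper as a function of the privacy budget may be sharper than this elementary one.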
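To make the definitions concrete, here is a minimal numeric sketch in Python, assuming discrete model outputs, a uniform membership prior, and hypothetical output distributions p_in / p_out; the function names and numbers are illustrative and not taken from the paper.

import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi alpha-divergence D_alpha(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if alpha == 1.0:  # the limit alpha -> 1 is the Kullback-Leibler divergence
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

def arimoto_renyi_information(p_m, p_y_given_m, alpha):
    """Arimoto-Rényi alpha-information I_alpha(M; Y), for alpha != 1.

    p_m: distribution of the membership bit M.
    p_y_given_m: rows are the conditional distributions P(Y | M = m).
    """
    p_m = np.asarray(p_m, float)
    p_my = p_m[:, None] * np.asarray(p_y_given_m, float)  # joint P(M, Y)
    h_m = np.log(np.sum(p_m**alpha)) / (1.0 - alpha)      # Rényi entropy H_alpha(M)
    # Arimoto's conditional Rényi entropy H_alpha(M | Y).
    h_m_given_y = (alpha / (1.0 - alpha)) * np.log(
        np.sum(np.sum(p_my**alpha, axis=0) ** (1.0 / alpha))
    )
    return h_m - h_m_given_y

# Hypothetical model-output distributions over three output symbols for
# records out of (M = 0) and in (M = 1) the training dataset.
p_out = [0.5, 0.3, 0.2]
p_in = [0.3, 0.3, 0.4]
alpha = 2.0

print("D_alpha(P_in || P_out) =", renyi_divergence(p_in, p_out, alpha))
print("I_alpha(M; Y)          =",
      arimoto_renyi_information([0.5, 0.5], [p_out, p_in], alpha))

Closer p_in and p_out (e.g., for a well-regularized or differentially-private model) drive both quantities toward zero, which is the qualitative behavior the abstract's leakage measures are meant to capture.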