Local Methods for Privacy Protection and Impact on Fairness

C. Palamidessi
{"title":"Local Methods for Privacy Protection and Impact on Fairness","authors":"C. Palamidessi","doi":"10.1145/3577923.3587263","DOIUrl":null,"url":null,"abstract":"The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate the issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-\\emphprivacy (also known as metric differential privacy ), outperforms the state-of-the-art, which is based on algebraic methods combined with the randomized response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Then, I will discuss the issue of biased predictions in machine learning, and how DP can affect the level of fairness and accuracy of the trained model. 
Finally, I will show that the IBU can be applied also in this domain to ensure fairer treatment of disadvantaged groups and reconcile fairness and accuracy.","PeriodicalId":387479,"journal":{"name":"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy","volume":"121 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3577923.3587263","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate these issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-privacy (also known as metric differential privacy), outperforms the state of the art, which is based on algebraic methods combined with the randomized response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Then, I will discuss the issue of biased predictions in machine learning, and how DP can affect the level of fairness and accuracy of the trained model. Finally, I will show that the IBU can also be applied in this domain to ensure fairer treatment of disadvantaged groups and reconcile fairness and accuracy.
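To make the two mechanisms mentioned above concrete, here is a minimal sketch (not the authors' implementation) of the standard pipeline the abstract refers to: a k-ary randomized response mechanism satisfying epsilon-local-DP produces noisy reports, and the iterative Bayesian update (an EM fixed-point iteration) estimates the original distribution from them. The distribution, sample size, and iteration count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# k-ary randomized response: report the true value with probability p,
# otherwise a uniformly random *other* value; this satisfies eps-local-DP.
k, eps = 4, 1.0
p = np.exp(eps) / (np.exp(eps) + k - 1)
C = np.full((k, k), (1 - p) / (k - 1))   # channel: C[x, y] = P(report y | true x)
np.fill_diagonal(C, p)

true_dist = np.array([0.5, 0.3, 0.15, 0.05])   # illustrative ground truth
x = rng.choice(k, size=50_000, p=true_dist)
flip = rng.random(len(x)) >= p
y = np.where(flip, (x + rng.integers(1, k, size=len(x))) % k, x)
q = np.bincount(y, minlength=k) / len(y)       # empirical noisy distribution

# Iterative Bayesian Update: EM iteration for the latent true distribution.
#   p_{t+1}(x) = sum_y q(y) * C[x,y] p_t(x) / sum_{x'} C[x',y] p_t(x')
est = np.full(k, 1.0 / k)                      # uniform prior
for _ in range(200):
    joint = C * est[:, None]                   # unnormalized P(x, y)
    est = (q * joint / joint.sum(axis=0)).sum(axis=1)

print(np.round(est, 3))
```

With 50,000 reports the estimate lands close to `true_dist` even though each individual report is heavily randomized; the IBU converges to the maximum-likelihood estimate of the original distribution given the noisy observations.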