Differentially Private Guarantees for Analytics and Machine Learning on Graphs: A Survey of Results

Journal of Privacy and Confidentiality · Published 2024-02-11 · DOI: 10.29012/jpc.820
Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, R. Braren, D. Rueckert, Georgios Kaissis

Abstract

We study the applications of differential privacy (DP) in the context of graph-structured data and discuss the formulations of DP applicable to the publication of graphs and their associated statistics, as well as machine learning on graph-based data, including graph neural networks (GNNs). Interpreting DP guarantees in the context of graph-structured data can be challenging, as individual data points are interconnected (often non-linearly or sparsely). This connectivity complicates the computation of individual privacy loss in differentially private learning. The problem is exacerbated by the absence of a single, well-established formulation of DP in graph settings. This issue extends to the domain of GNNs, rendering private machine learning on graph-structured data a challenging task. The lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective. In this work, we systematise the different formulations of DP on graphs and discuss challenges and promising applications, including in the GNN domain. We compare works and separate them into graph analytics tasks and graph learning tasks with GNNs. We conclude our work with a discussion of open questions and potential directions for further research in this area.
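To make the distinction between DP formulations on graphs concrete, the following is a minimal illustrative sketch (not taken from the surveyed paper) of one common formulation, edge-level DP: two graphs are neighbours if they differ in a single edge, so a query like the total edge count has global sensitivity 1 and can be released with Laplace noise. All function names here are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_edge_count(edges: set, epsilon: float) -> float:
    # Under edge-level DP, neighbouring graphs differ in exactly one edge,
    # so the edge count changes by at most 1 between neighbours:
    # global sensitivity = 1. The Laplace mechanism with scale
    # sensitivity / epsilon then gives an epsilon-DP release.
    sensitivity = 1.0
    return len(edges) + laplace_noise(sensitivity / epsilon)

# A toy undirected graph, stored as a set of edges.
graph = {(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)}
noisy_count = dp_edge_count(graph, epsilon=1.0)
print(noisy_count)
```

Under node-level DP, by contrast, neighbouring graphs differ in one node and all its incident edges, so the same edge-count query has sensitivity up to n - 1 and requires far more noise; this gap between edge- and node-level guarantees is one reason the choice of formulation matters so much in practice.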