Protecting Your Attention During Distributed Graph Learning: Efficient Privacy-Preserving Federated Graph Attention Network

IEEE Transactions on Information Forensics and Security, vol. 20, pp. 1949-1964. Published: 2025-01-29. DOI: 10.1109/TIFS.2025.3536612 (https://ieeexplore.ieee.org/document/10858070/). Impact Factor: 8.0; JCR Q1, Computer Science, Theory & Methods.

Authors: Jinhao Zhou, Jun Wu, Jianbing Ni, Yuntao Wang, Yanghe Pan, Zhou Su

Citations: 0

Abstract

Federated graph attention networks (FGATs) are gaining prominence for enabling collaborative, privacy-preserving graph model training. The attention mechanisms in FGATs sharpen the focus on crucial graph features, improving graph representation learning while keeping data decentralized. However, these mechanisms inherently process sensitive information and are therefore vulnerable to privacy threats such as graph reconstruction and attribute inference. Moreover, because attention assigns varying and changing importance to nodes, traditional privacy methods struggle to balance privacy and utility effectively across nodes of differing sensitivity. Our study fills this gap by proposing an efficient privacy-preserving FGAT (PFGAT): an attention-based dynamic differential privacy (DP) approach built on an improved multiplication triplet (IMT). Specifically, we first propose an IMT mechanism that leverages a reusable triplet generation method to compute the attention mechanism efficiently and securely. Second, we employ an attention-based privacy budget that dynamically adjusts privacy levels according to the significance of node data, optimizing the privacy-utility trade-off. Third, the proposed hybrid neighbor aggregation algorithm tailors DP mechanisms to the characteristics of neighbor nodes, mitigating the adverse impact of DP on graph attention network (GAT) utility. Extensive experiments on benchmark datasets confirm that PFGAT maintains high efficiency and ensures robust privacy protection against these threats.
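The "improved multiplication triplet" builds on the classic idea of multiplication (Beaver) triplets for secure computation; the paper's reusable-triplet variant is not detailed in this abstract. As background only, here is a minimal sketch of the standard Beaver-triple protocol over two-party additive secret shares, with the trusted dealer simulated locally (all names are hypothetical; this is not the paper's IMT construction):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 2**31 - 1  # modulus for additive secret sharing

def share(v):
    """Additively secret-share v between two parties mod Q."""
    r = int(rng.integers(0, Q))
    return r, (v - r) % Q

def beaver_multiply(x_sh, y_sh):
    """Classic Beaver-triple multiplication of two secret-shared values.
    A dealer samples a, b and shares a, b, c = a*b; the parties open
    d = x - a and e = y - b, then locally compute shares of x*y as
    z = c + d*b + e*a + d*e (the constant d*e added by one party only)."""
    a, b = int(rng.integers(0, Q)), int(rng.integers(0, Q))
    c = (a * b) % Q
    a_sh, b_sh, c_sh = share(a), share(b), share(c)

    # Both parties reveal their shares of d and e (this leaks nothing
    # about x, y because a, b act as one-time masks).
    d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % Q
    e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % Q

    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % Q
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % Q
    return z0, z1

x, y = 1234, 5678
z0, z1 = beaver_multiply(share(x), share(y))
assert (z0 + z1) % Q == (x * y) % Q  # shares recombine to the product
```

Correctness follows from x*y = (d+a)(e+b) = d*e + d*b + e*a + c; the paper's contribution, per the abstract, is making such triplets reusable so the attention computation stays efficient.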
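The abstract gives no formulas for the attention-based dynamic budget. As a rough, purely illustrative sketch, the following combines the standard GAT attention coefficients (softmax over LeakyReLU scores) with one plausible allocation rule: split a total budget across neighbors in proportion to attention, so higher-attention values receive a larger epsilon and thus less Laplace noise. All function names and the allocation rule are assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def gat_attention_scores(h, W, a, neighbors):
    """Standard GAT attention: alpha_ij = softmax_j(LeakyReLU(a^T [W h_i || W h_j]))."""
    z = h @ W                                    # projected node features
    scores = {}
    for i, nbrs in neighbors.items():
        e = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU, negative slope 0.2
        e = np.exp(e - e.max())                  # numerically stable softmax
        scores[i] = e / e.sum()
    return scores

def attention_weighted_noise(x, alpha, eps_total, sensitivity=1.0):
    """Split eps_total across neighbors in proportion to attention, then
    perturb each per-neighbor value with Laplace noise of scale
    sensitivity / eps_j (larger budget -> less noise)."""
    eps = eps_total * alpha / alpha.sum()        # per-neighbor budgets
    return x + rng.laplace(0.0, sensitivity / eps)

# Toy usage: one node with three neighbors.
h = rng.normal(size=(4, 3))      # 4 nodes, 3 input features
W = rng.normal(size=(3, 2))      # projection to 2 dims
a = rng.normal(size=4)           # attention vector over concatenated dims
alpha = gat_attention_scores(h, W, a, {0: [1, 2, 3]})[0]
noisy = attention_weighted_noise(np.ones(3), alpha, eps_total=1.0)
```

By sequential composition, the per-neighbor budgets still sum to eps_total; whether PFGAT allocates more or less budget to high-attention nodes is not specified in this abstract.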
Journal Information

IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Average review time: 6.5 months
Aims & Scope: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Recent articles in this journal:
SeeGait: Synergistic Co-evolving Representations for Multimodal Gait Recognition via Hierarchical Multi-Stage Fusion
One Trigger, Multiple Victims: Clean-Label Neighborhood Backdoor Attacks on Graph Neural Networks
Component-Specific Prompt Tuning for Deepfake Detection
GDetox: Purifying Backdoor Encoder in Graph Self-supervised Learning via Knowledge Distillation
IFAD: Privacy-Preserving Isolation Forest Based Anomaly Detection in Public Cloud Environments