Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks

Jing Xu, Stefanos Koffas, S. Picek
{"title":"揭示威胁:研究联盟图神经网络中的分布式和集中式后门攻击","authors":"Jing Xu, Stefanos Koffas, S. Picek","doi":"10.1145/3633206","DOIUrl":null,"url":null,"abstract":"Graph Neural Networks (GNNs) have gained significant popularity as powerful deep learning methods for processing graph data. However, centralized GNNs face challenges in data-sensitive scenarios due to privacy concerns and regulatory restrictions. Federated learning (FL) has emerged as a promising technology that enables collaborative training of a shared global model while preserving privacy. While FL has been applied to train GNNs, no research focuses on the robustness of Federated GNNs against backdoor attacks. This paper bridges this research gap by investigating two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Through extensive experiments, we demonstrate that DBA exhibits a higher success rate than CBA across various scenarios. To further explore the characteristics of these backdoor attacks in Federated GNNs, we evaluate their performance under different scenarios, including varying numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Additionally, we explore the resilience of DBA and CBA against two defense mechanisms. Our findings reveal that both defenses can not eliminate DBA and CBA without affecting the original task. This highlights the necessity of developing tailored defenses to mitigate the novel threat of backdoor attacks in Federated GNNs.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"23 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks\",\"authors\":\"Jing Xu, Stefanos Koffas, S. Picek\",\"doi\":\"10.1145/3633206\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph Neural Networks (GNNs) have gained significant popularity as powerful deep learning methods for processing graph data. However, centralized GNNs face challenges in data-sensitive scenarios due to privacy concerns and regulatory restrictions. Federated learning (FL) has emerged as a promising technology that enables collaborative training of a shared global model while preserving privacy. While FL has been applied to train GNNs, no research focuses on the robustness of Federated GNNs against backdoor attacks. This paper bridges this research gap by investigating two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Through extensive experiments, we demonstrate that DBA exhibits a higher success rate than CBA across various scenarios. To further explore the characteristics of these backdoor attacks in Federated GNNs, we evaluate their performance under different scenarios, including varying numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Additionally, we explore the resilience of DBA and CBA against two defense mechanisms. Our findings reveal that both defenses can not eliminate DBA and CBA without affecting the original task. 
This highlights the necessity of developing tailored defenses to mitigate the novel threat of backdoor attacks in Federated GNNs.\",\"PeriodicalId\":202552,\"journal\":{\"name\":\"Digital Threats: Research and Practice\",\"volume\":\"23 5\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Threats: Research and Practice\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3633206\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Threats: Research and Practice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3633206","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Graph Neural Networks (GNNs) have gained significant popularity as powerful deep learning methods for processing graph data. However, centralized GNNs face challenges in data-sensitive scenarios due to privacy concerns and regulatory restrictions. Federated learning (FL) has emerged as a promising technology that enables collaborative training of a shared global model while preserving privacy. Although FL has been applied to train GNNs, no prior research has focused on the robustness of Federated GNNs against backdoor attacks. This paper bridges that gap by investigating two types of backdoor attacks on Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Through extensive experiments, we demonstrate that DBA achieves a higher attack success rate than CBA across various scenarios. To further characterize these attacks, we evaluate their performance under different conditions, including varying numbers of clients, trigger sizes, poisoning intensities, and trigger densities. We also examine the resilience of DBA and CBA against two defense mechanisms. Our findings reveal that neither defense can eliminate DBA or CBA without degrading the original task, highlighting the need for tailored defenses that mitigate this novel threat to Federated GNNs.
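The abstract contrasts two injection strategies: in CBA, a single malicious client embeds the complete trigger subgraph into its local training graphs, while in DBA, the trigger is decomposed into smaller pieces that several colluding clients embed separately. Below is a minimal, illustrative sketch of that difference for a graph-classification setting. All function names and parameters here (`make_trigger`, `inject_cba`, `inject_dba`, `poison_rate`, and so on) are hypothetical conveniences; the paper's actual trigger generation and federated training pipeline may differ.

```python
# Illustrative sketch only: hypothetical helper names, not the paper's code.
import random
import networkx as nx

TARGET_LABEL = 0  # attacker-chosen class that triggered graphs are relabeled to

def make_trigger(trigger_size: int, trigger_density: float, seed: int = 0) -> nx.Graph:
    """Random Erdos-Renyi subgraph used as the backdoor trigger."""
    return nx.erdos_renyi_graph(trigger_size, trigger_density, seed=seed)

def embed_trigger(graph: nx.Graph, trigger: nx.Graph, rng: random.Random) -> nx.Graph:
    """Rewire a random set of host nodes in `graph` to match the trigger's edge pattern."""
    g = graph.copy()
    hosts = rng.sample(list(g.nodes()), trigger.number_of_nodes())
    mapping = dict(zip(trigger.nodes(), hosts))
    # clear existing edges among the host nodes, then wire in the trigger pattern
    g.remove_edges_from([(u, v) for u in hosts for v in hosts if u < v and g.has_edge(u, v)])
    g.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges())
    return g

def inject_cba(client_data, trigger, poison_rate, rng):
    """CBA: one malicious client embeds the WHOLE trigger into a fraction of its graphs."""
    return [(embed_trigger(g, trigger, rng), TARGET_LABEL)
            if rng.random() < poison_rate else (g, y)
            for g, y in client_data]

def split_trigger(trigger: nx.Graph, num_clients: int):
    """Partition the trigger's nodes and return the induced local sub-triggers."""
    nodes = list(trigger.nodes())
    return [trigger.subgraph(nodes[i::num_clients]).copy() for i in range(num_clients)]

def inject_dba(clients_data, trigger, poison_rate, rng):
    """DBA: each colluding client embeds only its OWN piece of the split trigger."""
    pieces = split_trigger(trigger, len(clients_data))
    return [inject_cba(data, piece, poison_rate, rng)
            for data, piece in zip(clients_data, pieces)]

# Usage: a 4-node trigger; CBA uses one attacker, DBA splits it over two clients.
rng = random.Random(42)
trigger = make_trigger(trigger_size=4, trigger_density=0.8)
dataset = [(nx.erdos_renyi_graph(20, 0.2, seed=i), 1) for i in range(8)]
cba_data = inject_cba(dataset, trigger, poison_rate=0.5, rng=rng)
dba_data = inject_dba([dataset[:4], dataset[4:]], trigger, poison_rate=0.5, rng=rng)
```

The knobs in this sketch mirror the factors the paper varies: `trigger_size` and `trigger_density` shape the trigger subgraph, `poison_rate` corresponds to the poisoning intensity, and the number of entries in `clients_data` to the count of malicious clients. A real attack would then train local GNNs on the poisoned data and send the resulting updates to the federated server.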