CareFL: Contribution Guided Byzantine-Robust Federated Learning

IF 6.3 · CAS Tier 1 (Computer Science) · Q1 (Computer Science, Theory & Methods) · IEEE Transactions on Information Forensics and Security · Pub Date: 2024-10-10 · DOI: 10.1109/TIFS.2024.3477912 · Pages: 9714-9729
Qihao Dong;Shengyuan Yang;Zhiyang Dai;Yansong Gao;Shang Wang;Yuan Cao;Anmin Fu;Willy Susilo
{"title":"CareFL:贡献指导拜占庭式稳健联合学习","authors":"Qihao Dong;Shengyuan Yang;Zhiyang Dai;Yansong Gao;Shang Wang;Yuan Cao;Anmin Fu;Willy Susilo","doi":"10.1109/TIFS.2024.3477912","DOIUrl":null,"url":null,"abstract":"Byzantine-robust federated learning (FL) endeavors to empower service providers in acquiring a precise global model, even in the presence of potentially malicious FL clients. While considerable strides have been taken in the development of robust aggregation algorithms for FL in recent years, their efficacy is confined to addressing particular forms of Byzantine attacks, and they exhibit vulnerabilities when confronted with a spectrum of attack vectors. Notably, a prevailing issue lies in the heavy reliance of these algorithms on the examination of local model gradients. It is worth noting that an attacker possesses the ability to manipulate a carefully chosen small gradient of a model within a context where there could be millions of gradients available, thereby facilitating adaptive attacks. Drawing inspiration from the foundational Shapley value methodology in game theory, we introduce an effective FL scheme named \n<monospace>CareFL</monospace>\n. This scheme is designed to provide robustness against a spectrum of state-of-the-art Byzantine attacks. Unlike approaches that rely on the examination of gradients, \n<monospace>CareFL</monospace>\n employs a universal metric, the loss of the local model—independent of specific gradients, to identify potentially malicious clients. Specifically, in each aggregation round, the FL server trains a reference model using a small auxiliary dataset— the auxiliary dataset can be removed with a slight defense degradation trade-off. It employs the Shapley value to assess the contribution of each client-submitted model in minimizing the global model loss. Subsequently, the server selects client models closer to the reference model in terms of Shapley values for the global model update. To reduce the computational overhead of \n<monospace>CareFL</monospace>\n when the number of clients is relatively scaled-up, we construct its variant, namely \n<monospace>CareFL</monospace>\n+ generally by grouping clients. Extensive experimentation conducted on well-established MNIST and CIFAR-10 datasets, encompassing diverse model architectures, including AlexNet, demonstrates that \n<monospace>CareFL</monospace>\n consistently achieves accuracy levels comparable to those attained under attack-free conditions when faced with five formidable attacks. \n<monospace>CareFL</monospace>\n and CareFL+ outperform six existing state-of-the-art Byzantine-robust FL aggregation methods, including \n<monospace>FLTrust</monospace>\n, across both IID and non-IID data distribution settings.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"19 ","pages":"9714-9729"},"PeriodicalIF":6.3000,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CareFL: Contribution Guided Byzantine-Robust Federated Learning\",\"authors\":\"Qihao Dong;Shengyuan Yang;Zhiyang Dai;Yansong Gao;Shang Wang;Yuan Cao;Anmin Fu;Willy Susilo\",\"doi\":\"10.1109/TIFS.2024.3477912\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Byzantine-robust federated learning (FL) endeavors to empower service providers in acquiring a precise global model, even in the presence of potentially malicious FL clients. 
While considerable strides have been taken in the development of robust aggregation algorithms for FL in recent years, their efficacy is confined to addressing particular forms of Byzantine attacks, and they exhibit vulnerabilities when confronted with a spectrum of attack vectors. Notably, a prevailing issue lies in the heavy reliance of these algorithms on the examination of local model gradients. It is worth noting that an attacker possesses the ability to manipulate a carefully chosen small gradient of a model within a context where there could be millions of gradients available, thereby facilitating adaptive attacks. Drawing inspiration from the foundational Shapley value methodology in game theory, we introduce an effective FL scheme named \\n<monospace>CareFL</monospace>\\n. This scheme is designed to provide robustness against a spectrum of state-of-the-art Byzantine attacks. Unlike approaches that rely on the examination of gradients, \\n<monospace>CareFL</monospace>\\n employs a universal metric, the loss of the local model—independent of specific gradients, to identify potentially malicious clients. Specifically, in each aggregation round, the FL server trains a reference model using a small auxiliary dataset— the auxiliary dataset can be removed with a slight defense degradation trade-off. It employs the Shapley value to assess the contribution of each client-submitted model in minimizing the global model loss. Subsequently, the server selects client models closer to the reference model in terms of Shapley values for the global model update. To reduce the computational overhead of \\n<monospace>CareFL</monospace>\\n when the number of clients is relatively scaled-up, we construct its variant, namely \\n<monospace>CareFL</monospace>\\n+ generally by grouping clients. Extensive experimentation conducted on well-established MNIST and CIFAR-10 datasets, encompassing diverse model architectures, including AlexNet, demonstrates that \\n<monospace>CareFL</monospace>\\n consistently achieves accuracy levels comparable to those attained under attack-free conditions when faced with five formidable attacks. 
\\n<monospace>CareFL</monospace>\\n and CareFL+ outperform six existing state-of-the-art Byzantine-robust FL aggregation methods, including \\n<monospace>FLTrust</monospace>\\n, across both IID and non-IID data distribution settings.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"19 \",\"pages\":\"9714-9729\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10713463/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10713463/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
引用次数: 0

Abstract

Byzantine-robust federated learning (FL) endeavors to empower service providers in acquiring a precise global model, even in the presence of potentially malicious FL clients. While considerable strides have been taken in the development of robust aggregation algorithms for FL in recent years, their efficacy is confined to addressing particular forms of Byzantine attacks, and they exhibit vulnerabilities when confronted with a spectrum of attack vectors. Notably, a prevailing issue lies in the heavy reliance of these algorithms on the examination of local model gradients: an attacker can manipulate a carefully chosen small set of gradients within a model that may contain millions of them, thereby facilitating adaptive attacks. Drawing inspiration from the foundational Shapley value methodology in game theory, we introduce an effective FL scheme named CareFL, designed to provide robustness against a spectrum of state-of-the-art Byzantine attacks. Unlike approaches that rely on the examination of gradients, CareFL employs a universal metric, the loss of the local model, which is independent of any specific gradients, to identify potentially malicious clients. Specifically, in each aggregation round, the FL server trains a reference model using a small auxiliary dataset (the auxiliary dataset can be removed at the cost of a slight degradation in defense). It employs the Shapley value to assess the contribution of each client-submitted model to minimizing the global model loss. Subsequently, the server selects the client models closest to the reference model in terms of Shapley values for the global model update. To reduce the computational overhead of CareFL as the number of clients scales up, we construct a variant, CareFL+, by grouping clients. Extensive experimentation on the well-established MNIST and CIFAR-10 datasets, encompassing diverse model architectures including AlexNet, demonstrates that CareFL consistently achieves accuracy levels comparable to those attained under attack-free conditions when faced with five formidable attacks. CareFL and CareFL+ outperform six existing state-of-the-art Byzantine-robust FL aggregation methods, including FLTrust, across both IID and non-IID data distribution settings.
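To make the mechanism concrete, below is a minimal sketch of the kind of loss-based Shapley scoring the abstract describes, assuming a coalition's utility is the negative auxiliary-set loss of its averaged model and that a client is kept when its Shapley value lies within a tolerance of the reference model's. The classical Shapley value of player i is φ_i = Σ_{S⊆N∖{i}} [|S|!(|N|−|S|−1)!/|N|!]·(v(S∪{i}) − v(S)), computed here by averaging marginal gains over all join orders. The utility definition, the selection rule, and names such as eval_loss and tolerance are illustrative assumptions, not the paper's exact algorithm.

```python
# A minimal, illustrative sketch of loss-based Shapley scoring for robust
# aggregation, in the spirit of the CareFL description above. The coalition
# utility, the reference-anchored selection rule, and the names (eval_loss,
# tolerance) are assumptions made for illustration.
from itertools import permutations
from math import factorial


def average_weights(models):
    """Element-wise average of a list of flat weight vectors."""
    return [sum(ws) / len(models) for ws in zip(*models)]


def shapley_values(models, utility):
    """Exact Shapley values: each model's marginal contribution to the
    coalition utility, averaged over every join order. Enumeration is fine
    for a handful of models; at scale one would sample random permutations
    instead (or group clients, as CareFL+ does)."""
    n = len(models)
    totals = [0.0] * n
    for order in permutations(range(n)):
        coalition, prev = [], utility([])
        for i in order:
            coalition.append(models[i])
            cur = utility(coalition)
            totals[i] += cur - prev
            prev = cur
    return [t / factorial(n) for t in totals]


def select_clients(client_models, ref_model, eval_loss, tolerance=0.5):
    """Keep the clients whose Shapley value lies close to that of the
    server's reference model (trained on the small auxiliary dataset)."""
    models = client_models + [ref_model]

    def utility(coalition):
        # A coalition's worth: negative auxiliary-set loss of its averaged
        # model; an empty coalition contributes nothing.
        return -eval_loss(average_weights(coalition)) if coalition else 0.0

    sv = shapley_values(models, utility)
    ref_sv = sv[-1]  # the reference model is the last "player"
    return [i for i, v in enumerate(sv[:-1]) if abs(v - ref_sv) <= tolerance]


if __name__ == "__main__":
    # Toy run: two benign clients near the reference and one Byzantine
    # outlier, scored with a squared loss against the auxiliary-set optimum
    # [1.0, 1.0] (a stand-in for evaluating on real auxiliary data).
    clients = [[1.0, 1.0], [0.9, 1.1], [3.0, -1.0]]
    reference = [1.0, 1.0]
    loss = lambda w: sum((a - b) ** 2 for a, b in zip(w, [1.0, 1.0]))
    print(select_clients(clients, reference, loss))  # -> [0, 1]
```

Exact enumeration costs O(|N|!) utility evaluations, which is precisely the overhead that motivates CareFL+'s client grouping; at scale, sampling a few hundred random permutations yields the standard Monte Carlo approximation of the same scores.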