Maximizing Uncertainty for Federated Learning via Bayesian Optimization-Based Model Poisoning

Impact Factor: 8.0 · CAS Region 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, THEORY & METHODS · IEEE Transactions on Information Forensics and Security · Pub Date: 2025-01-17 · DOI: 10.1109/TIFS.2025.3531143
Marios Aristodemou;Xiaolan Liu;Yuan Wang;Konstantinos G. Kyriakopoulos;Sangarapillai Lambotharan;Qingsong Wei
Volume 20, pages 2399-2411 · https://ieeexplore.ieee.org/document/10844871/ · Citations: 0

Abstract

As we transition from Narrow Artificial Intelligence towards Artificial Super Intelligence, users are increasingly concerned about their privacy and the trustworthiness of machine learning (ML) technology. A common denominator for metrics of trustworthiness is the quantification of the uncertainty inherent in deep learning (DL) algorithms, specifically in the model parameters, input data, and model predictions. A common approach to addressing privacy-related issues in DL is to adopt distributed learning such as federated learning (FL), in which private raw data is not shared among users. Despite its privacy-preserving mechanisms, FL still faces challenges in trustworthiness. Specifically, malicious users can, during training, systematically craft malicious model parameters that compromise the model's predictive and generative capabilities, resulting in high uncertainty about its reliability. To demonstrate such malicious behaviour, we propose a novel model poisoning attack method named Delphi, which aims to maximise the uncertainty of the global model output. We achieve this by exploiting the relationship between the uncertainty and the model parameters of the first hidden layer of the local model. Delphi employs two types of optimisation, Bayesian Optimisation and Least Squares Trust Region, to search for the optimal poisoned model parameters, yielding two variants named Delphi-BO and Delphi-LSTR. We quantify uncertainty using the KL Divergence, minimising the distance between the predictive probability distribution and an uncertain distribution of the model output. Furthermore, we establish a mathematical proof of the attack's effectiveness in FL. Numerical results demonstrate that Delphi-BO induces a higher amount of uncertainty than Delphi-LSTR, highlighting the vulnerability of FL systems to model poisoning attacks.
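The attack objective described above can be illustrated with a minimal sketch: drive a local model's predictions towards the uniform (maximum-uncertainty) distribution by perturbing only the first hidden layer's weights and scoring candidates with the KL divergence to uniform. All shapes, the toy two-layer network, and the naive random search (standing in for the paper's Bayesian Optimisation and Least Squares Trust Region solvers) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_to_uniform(p, eps=1e-12):
    """KL(p || u), where u is the uniform distribution over classes.
    A Delphi-style attacker *minimises* this quantity, pushing the
    model's predictions towards maximum uncertainty."""
    k = p.shape[-1]
    u = np.full(k, 1.0 / k)
    return np.sum(p * (np.log(p + eps) - np.log(u)), axis=-1)

# Toy local model: one hidden layer plus a linear head (hypothetical shapes).
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))   # a batch of inputs
W1 = rng.normal(size=(10, 16))  # first hidden layer: the attacked parameters
W2 = rng.normal(size=(16, 5))   # output head, left untouched

def predict(W1):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return softmax(h @ W2)

# Naive random search standing in for BO / trust-region optimisation:
# keep the first-layer perturbation whose predictions are closest to uniform.
best_W1, best_score = W1, kl_to_uniform(predict(W1)).mean()
for _ in range(200):
    cand = best_W1 + 0.1 * rng.normal(size=W1.shape)
    score = kl_to_uniform(predict(cand)).mean()
    if score < best_score:
        best_W1, best_score = cand, score

print(best_score <= kl_to_uniform(predict(W1)).mean())  # True: only improvements are kept
```

The key property mirrored from the abstract is the scoring rule: a perfectly poisoned model would output 1/k for every class, giving a KL divergence of zero; the search only touches the first layer, matching the paper's focus on those parameters.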
Source journal
IEEE Transactions on Information Forensics and Security
Category: Engineering & Technology, Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 7.40%
Annual articles: 234
Review time: 6.5 months
Journal scope: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles in this journal:
- Rethinking Cross-Table Quantization Step Estimation: From Global and Local Perspectives
- Adversarial Training for Graph Neural Networks via Graph Subspace Energy Optimization
- LLMBA: Efficient Behavior Analytics via Large Pretrained Models in Zero Trust Networks
- Model Hijacking Attack in Federated Learning
- Model Inversion Attack against Federated Unlearning