LFGuard: A Defense against Label Flipping Attack in Federated Learning for Vehicular Network

Computer Networks · IF 4.4 · JCR Q1 (Computer Science, Hardware & Architecture) · CAS Tier 2 (Computer Science) · Pub date: 2024-09-03 · DOI: 10.1016/j.comnet.2024.110768
Source: https://www.sciencedirect.com/science/article/pii/S1389128624006005 · Citations: 0

Abstract

The explosive growth of the interconnected vehicle network creates vast amounts of data within individual vehicles, offering exciting opportunities to develop advanced applications. FL (Federated Learning) is a game-changer for vehicular networks, enabling powerful distributed data processing across vehicles to build intelligent applications while promoting collaborative training and safeguarding data privacy. However, recent research has exposed a critical vulnerability in FL: poisoning attacks, where malicious actors can manipulate data, labels, or models to subvert the system. Despite its advantages, deploying FL in dynamic vehicular environments with a multitude of distributed vehicles presents unique challenges. One such challenge is the potential for a significant number of malicious actors to tamper with data. We propose a hierarchical FL framework for vehicular networks to address these challenges, promising lower latency and broader coverage. We also present a defense mechanism, LFGuard, which employs a detection system to pinpoint malicious vehicles. It then excludes their local models from the aggregation stage, significantly reducing their influence on the final outcome. We evaluate LFGuard against state-of-the-art techniques using three popular benchmark datasets in a heterogeneous environment. Results show that LFGuard outperforms prior studies in thwarting targeted label-flipping attacks, with more than a 5% improvement in global model accuracy, a 12% improvement in source class recall, and a 6% reduction in the attack success rate, while maintaining high model utility.
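The pipeline the abstract describes — malicious clients flip labels from a source class to a target class, the server detects suspicious local models and excludes them from aggregation — can be sketched as follows. This is a minimal illustration, not the paper's method: the abstract does not specify LFGuard's detection system, so the detector below is a simple median-distance heuristic standing in for it, and all function names are illustrative.

```python
import numpy as np

def flip_labels(labels, source_class, target_class):
    """Targeted label-flipping attack: relabel every sample of the
    source class as the target class, leaving other labels intact."""
    flipped = labels.copy()
    flipped[flipped == source_class] = target_class
    return flipped

def flag_outliers(client_updates, threshold=2.0):
    """Stand-in detector (assumption, not LFGuard's actual mechanism):
    flag clients whose flattened model update lies unusually far from
    the coordinate-wise median of all updates."""
    updates = np.stack(client_updates)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    cutoff = dists.mean() + threshold * dists.std()
    return dists > cutoff

def filtered_fedavg(client_updates, flagged):
    """Exclusion-based aggregation: average only the updates from
    clients that were not flagged as malicious."""
    benign = [u for u, bad in zip(client_updates, flagged) if not bad]
    return np.mean(benign, axis=0)

# Five benign clients submit similar updates; one poisoned client
# submits a divergent update. The flagged update is excluded before
# averaging, so it cannot pull the global model toward the attack.
updates = [np.ones(2) for _ in range(5)] + [np.array([9.0, -9.0])]
flags = flag_outliers(updates)
global_update = filtered_fedavg(updates, flags)
```

In this sketch the poisoned update is dropped entirely rather than down-weighted, mirroring the abstract's description of excluding malicious vehicles' local models from the aggregation stage.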

Published in: Computer Networks (Engineering & Technology – Telecommunications)
CiteScore: 10.80 · Self-citation rate: 3.60% · Articles per year: 434 · Review time: 8.6 months

About the journal: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.