Minimal data poisoning attack in federated learning for medical image classification: An attacker perspective

IF 6.1 · CAS Region 2 (Medicine) · JCR Q1, Computer Science, Artificial Intelligence · Artificial Intelligence in Medicine · Pub Date: 2024-11-21 · DOI: 10.1016/j.artmed.2024.103024
K. Naveen Kumar, C. Krishna Mohan, Linga Reddy Cenkeramaddi, Navchetan Awasthi
{"title":"Minimal data poisoning attack in federated learning for medical image classification: An attacker perspective","authors":"K. Naveen Kumar ,&nbsp;C. Krishna Mohan ,&nbsp;Linga Reddy Cenkeramaddi ,&nbsp;Navchetan Awasthi","doi":"10.1016/j.artmed.2024.103024","DOIUrl":null,"url":null,"abstract":"<div><div>The privacy-sensitive nature of medical image data is often bounded by strict data sharing regulations that necessitate the need for novel modeling and analysis techniques. Federated learning (FL) enables multiple medical institutions to collectively train a deep neural network without sharing sensitive patient information. In addition, FL uses its collaborative approach to address challenges related to the scarcity and non-uniform distribution of heterogeneous medical domain data. Nevertheless, the data-opaque nature and distributed setup make FL susceptible to data poisoning attacks. There are diverse FL data poisoning attacks for classification models on natural image data in the literature. But their primary focus is on the impact of the attack and they do not consider the attack budget and attack visibility. The attack budget is essential for adversaries to optimize resource utilization in real-world scenarios, which determines the number of manipulations or perturbations they can apply. Simultaneously, attack visibility is crucial to ensure covert execution, allowing attackers to achieve their objectives without triggering detection mechanisms. Generally, an attacker’s aim is to create maximum attack impact with minimal resources and low visibility. So, considering these three entities can effectively comprehend the adversary’s perspective in designing an attack for real-world scenarios. Further, data poisoning attacks on medical images are challenging compared to natural images due to the subjective nature of medical data. Hence, we develop an attack with a low budget, low visibility, and high impact for medical image classification in FL. We propose a federated learning attention guided minimal attack (FL-AGMA), that uses class attention maps to identify specific medical image regions for perturbation. We introduce image distortion degree (IDD) as a metric to assess the attack budget. Also, we develop a feedback mechanism to regulate the attack coefficient for low attack visibility. Later, we optimize the attack budget by adaptively changing the IDD based on attack visibility. We extensively evaluate three large-scale datasets, namely, Covid-chestxray, Camelyon17, and HAM10000, covering three different data modalities. We observe that our FL-AGMA method has resulted in 44.49% less test accuracy with only 24% of IDD attack budget and lower attack visibility compared to the other attacks.</div></div>","PeriodicalId":55458,"journal":{"name":"Artificial Intelligence in Medicine","volume":"159 ","pages":"Article 103024"},"PeriodicalIF":6.1000,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence in Medicine","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0933365724002665","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

The privacy-sensitive nature of medical image data means it is often bound by strict data-sharing regulations, necessitating novel modeling and analysis techniques. Federated learning (FL) enables multiple medical institutions to collectively train a deep neural network without sharing sensitive patient information. In addition, FL's collaborative approach addresses challenges related to the scarcity and non-uniform distribution of heterogeneous medical domain data. Nevertheless, the data-opaque nature and distributed setup make FL susceptible to data poisoning attacks. The literature contains diverse FL data poisoning attacks against classification models on natural image data, but they focus primarily on attack impact and do not consider the attack budget or attack visibility. The attack budget, which determines the number of manipulations or perturbations an adversary can apply, is essential for optimizing resource utilization in real-world scenarios. Simultaneously, attack visibility is crucial to covert execution, allowing attackers to achieve their objectives without triggering detection mechanisms. Generally, an attacker's aim is to create maximum attack impact with minimal resources and low visibility, so considering these three factors together effectively captures the adversary's perspective when designing an attack for real-world scenarios. Further, data poisoning attacks on medical images are more challenging than on natural images due to the subjective nature of medical data. Hence, we develop a low-budget, low-visibility, high-impact attack for medical image classification in FL. We propose a federated learning attention-guided minimal attack (FL-AGMA) that uses class attention maps to identify specific medical image regions for perturbation. We introduce the image distortion degree (IDD) as a metric to assess the attack budget, and we develop a feedback mechanism that regulates the attack coefficient to keep attack visibility low. We then optimize the attack budget by adaptively changing the IDD based on attack visibility. We extensively evaluate the attack on three large-scale datasets, namely Covid-chestxray, Camelyon17, and HAM10000, covering three different data modalities. Compared to other attacks, our FL-AGMA method reduces test accuracy by 44.49% while using only 24% of the IDD attack budget and exhibiting lower attack visibility.
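The abstract outlines the mechanics of FL-AGMA at a high level: class attention maps select the image regions to perturb, the image distortion degree (IDD) caps the attack budget, and a feedback loop shrinks the attack coefficient when the attack becomes too visible. Below is a minimal illustrative sketch of that idea in Python. The paper's implementation is not reproduced here, so every name (idd, poison_image, regulate_coefficient), the pixel-fraction definition of IDD, and the Grad-CAM-style attention input are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch only: attention-guided minimal poisoning under an
# IDD-style pixel budget. Function and variable names are hypothetical.
import numpy as np

def idd(clean: np.ndarray, poisoned: np.ndarray) -> float:
    """Toy image distortion degree: the fraction of pixels whose value
    changed, standing in for the paper's attack-budget metric."""
    return float(np.mean(np.any(clean != poisoned, axis=-1)))

def regulate_coefficient(coefficient: float, visibility: float,
                         threshold: float = 0.05, decay: float = 0.9) -> float:
    """Toy feedback rule: if a visibility signal (e.g., a server-side
    anomaly score) exceeds a threshold, shrink the perturbation strength."""
    return coefficient * decay if visibility > threshold else coefficient

def poison_image(image: np.ndarray, attention_map: np.ndarray,
                 budget: float = 0.24, coefficient: float = 0.1,
                 rng: np.random.Generator | None = None) -> np.ndarray:
    """Perturb only the most class-discriminative pixels.

    image:         H x W x C array with values in [0, 1]
    attention_map: H x W class attention map (e.g., Grad-CAM-like);
                   higher values mark more class-salient regions
    budget:        fraction of pixels the attacker may touch (the IDD budget)
    coefficient:   perturbation strength, kept small for low visibility
    """
    rng = rng or np.random.default_rng()
    h, w = attention_map.shape
    k = int(budget * h * w)                      # pixels allowed under the budget
    top = np.argpartition(attention_map.ravel(), -k)[-k:]
    mask = np.zeros(h * w, dtype=bool)
    mask[top] = True                             # restrict noise to salient regions
    mask = mask.reshape(h, w)[..., None]         # broadcast over channels

    noise = coefficient * np.sign(rng.standard_normal(image.shape))
    return np.clip(np.where(mask, image + noise, image), 0.0, 1.0)

# Toy usage with a random "image" and a synthetic attention map.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
attn = rng.random((32, 32))
out = poison_image(img, attn, rng=rng)
print(f"measured IDD: {idd(img, out):.2f}")      # ~0.24, matching the budget
```

In this toy version the budget directly fixes the fraction of perturbed pixels, so the measured IDD tracks the 24% figure quoted in the abstract; the actual method additionally adapts the IDD itself based on observed attack visibility.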
Source journal

Artificial Intelligence in Medicine (Engineering Technology - Engineering: Biomedical)

CiteScore: 15.00
Self-citation rate: 2.70%
Articles published: 143
Review time: 6.3 months

Journal description: Artificial Intelligence in Medicine publishes original articles from a wide variety of interdisciplinary perspectives concerning the theory and practice of artificial intelligence (AI) in medicine, medically-oriented human biology, and health care. Artificial intelligence in medicine may be characterized as the scientific discipline pertaining to research studies, projects, and applications that aim at supporting decision-based medical tasks through knowledge- and/or data-intensive computer-based solutions that ultimately support and improve the performance of a human care provider.