RDMAA: Robust Defense Model against Adversarial Attacks in Deep Learning for Cancer Diagnosis

Atrab A. Abd El-Aziz, Reda A. El-Khoribi, Nour Eldeen Khalifa
{"title":"RDMAA: Robust Defense Model against Adversarial Attacks\nin Deep Learning for Cancer Diagnosis","authors":"Atrab A. Abd El-Aziz, Reda A. El-Khoribi, Nour Eldeen Khalifa","doi":"10.12785/ijcds/150190","DOIUrl":null,"url":null,"abstract":": Attacks against deep learning (DL) models are considered a significant security threat. However, DL especially deep convolutional neural networks (CNN) has shown extraordinary success in a wide range of medical applications, recent studies have recently proved that they are vulnerable to adversarial attacks. Adversarial attacks are techniques that add small, crafted perturbations to the input images that are practically imperceptible from the original but misclassified by the network. To address these threats, in this paper, a novel defense technique against white-box adversarial attacks based on CNN fine-tuning using the weights of the pre-trained deep convolutional autoencoder (DCAE) called Robust Defense Model against Adversarial Attacks (RDMAA), for DL-based cancer diagnosis is introduced. Before feeding the classifier with adversarial examples, the RDMAA model is trained where the perpetuated input samples are reconstructed. Then, the weights of the previously trained RDMAA are used to fine-tune the CNN-based cancer diagnosis models. The fast gradient method (FGSM) and the project gradient descent (PGD) attacks are applied against three DL-cancer modalities (lung nodule X-ray, leukemia microscopic, and brain tumor magnetic resonance imaging (MRI)) for binary and multiclass labels. The experiment’s results proved that under attacks, the accuracy decreased to 35% and 40% for X-rays, 36% and 66% for microscopic, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, achieving a maximum absolute increase of 88% and 83% for X-rays, 89% and 87% for microscopic cases, and 93% for brain MRI. The RDMAA model is compared with another common technique (adversarial training) and outperforms it. Results show that DL-based cancer diagnoses are extremely vulnerable to adversarial attacks, even imperceptible perturbations are enough to fool the model. The proposed model RDMAA provides a solid foundation for developing more robust and accurate medical DL models.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"31 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computing and Digital Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12785/ijcds/150190","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Attacks against deep learning (DL) models are considered a significant security threat. Although DL, and deep convolutional neural networks (CNNs) in particular, has shown extraordinary success in a wide range of medical applications, recent studies have proved that these models are vulnerable to adversarial attacks. Adversarial attacks are techniques that add small, crafted perturbations to input images that are practically imperceptible from the original but cause the network to misclassify. To address these threats, this paper introduces a novel defense technique against white-box adversarial attacks for DL-based cancer diagnosis, called the Robust Defense Model against Adversarial Attacks (RDMAA), which fine-tunes a CNN using the weights of a pre-trained deep convolutional autoencoder (DCAE). Before the classifier is fed adversarial examples, the RDMAA model is trained to reconstruct the perturbed input samples. The weights of the trained RDMAA are then used to fine-tune the CNN-based cancer diagnosis models. The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks are applied against three DL cancer modalities (lung nodule X-ray, leukemia microscopy, and brain tumor magnetic resonance imaging (MRI)) for binary and multiclass labels. The experimental results show that under attack, accuracy dropped to 35% and 40% for X-rays, 36% and 66% for microscopy, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, with accuracy rising to a maximum of 88% and 83% for X-rays, 89% and 87% for microscopy, and 93% for brain MRI. The RDMAA model is compared with another common technique (adversarial training) and outperforms it. The results show that DL-based cancer diagnosis is extremely vulnerable to adversarial attacks: even imperceptible perturbations are enough to fool the model. The proposed RDMAA model provides a solid foundation for developing more robust and accurate medical DL models.
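
The abstract describes two mechanisms: crafting adversarial examples with FGSM/PGD, and fine-tuning the diagnostic CNN from the weights of a DCAE trained to reconstruct perturbed inputs. Below is a minimal PyTorch sketch of those two ideas, not the authors' implementation; the network layouts, the `fgsm_attack` helper, and hyperparameters such as `epsilon` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two pieces the abstract describes:
# (1) crafting an FGSM adversarial example, and (2) fine-tuning a CNN classifier
# from the encoder weights of a pre-trained deep convolutional autoencoder (DCAE).
# All module shapes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x perturbed by epsilon * sign(grad of loss w.r.t. x), i.e. FGSM."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

class ConvAutoencoder(nn.Module):
    """Toy DCAE: trained to reconstruct perturbed images; its encoder is reused."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Classifier(nn.Module):
    """CNN classifier whose convolutional stem mirrors the DCAE encoder."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.encoder(x))

# Transfer the pre-trained encoder weights into the classifier, then fine-tune
# the classifier on clean and adversarial examples (training loop omitted).
dcae = ConvAutoencoder()          # assumed already trained to reconstruct perturbed images
clf = Classifier(num_classes=2)
clf.encoder.load_state_dict(dcae.encoder.state_dict())
```

The design choice sketched here is that the classifier's convolutional stem shares its architecture with the DCAE encoder, so the reconstruction-trained weights can be loaded directly before fine-tuning on the diagnostic task.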