Skin cancer detection using lightweight model souping and ensembling knowledge distillation for memory-constrained devices

Muhammad Rafsan Kabir, Rashidul Hassan Borshon, Mahiv Khan Wasi, Rafeed Mohammad Sultan, Ahmad Hossain, Riasat Khan
DOI: 10.1016/j.ibmed.2024.100176
Journal: Intelligence-based medicine, Volume 10, Article 100176
Publication date: 2024-01-01 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S2666521224000437
Citations: 0

Abstract

The escalating prevalence of skin cancer is a significant public-health concern, affecting numerous individuals. This work comprehensively explores advanced artificial intelligence-based deep learning techniques for skin cancer detection, utilizing the HAM10000 dataset. The experimental study fine-tunes two knowledge distillation teacher models, ResNet50 (25.6M parameters) and DenseNet161 (28.7M parameters), achieving remarkable accuracies of 98.32% and 98.80%, respectively. Despite their accuracy, training and deploying these large models is challenging on memory-constrained medical devices. To address this issue, we introduce TinyStudent (0.35M parameters), trained via knowledge distillation from ResNet50 and DenseNet161, yielding accuracies of 85.45% and 85.00%, respectively. While TinyStudent does not match the teachers' accuracy, it is 82 and 73 times smaller than DenseNet161 and ResNet50, respectively, implying reduced training time and computational resource requirements. This substantial reduction in parameter count makes deployment on memory-constrained edge devices feasible. Multi-teacher distillation, incorporating knowledge from both teachers, yields a competitive student accuracy of 84.10%. Ensembling methods, such as average ensembling and concatenation, further enhance predictive performance, achieving accuracies of 87.74% and 88.00%, respectively, each with approximately 1.05M parameters. Compared with DenseNet161 and ResNet50, these lightweight ensemble models offer shorter inference times, suitable for medical devices. Additionally, our implementation of the greedy method in Model Soup achieves an accuracy of 85.70%.
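The teacher-student distillation described above can be sketched in a few lines. This is a generic illustration of temperature-scaled knowledge distillation (Hinton-style soft targets plus a hard-label term), not the paper's exact loss; the temperature `T`, mixing weight `alpha`, and helper names are assumptions for illustration.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.7):
    """Weighted sum of a soft-target term and a hard-label term.

    The soft term is KL(teacher || student) on temperature-softened
    distributions, scaled by T^2; the hard term is standard
    cross-entropy on the ground-truth class. For multi-teacher
    distillation, teacher_logits could instead be an average of
    several teachers' outputs.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s)) * T * T
    ce = -math.log(softmax(student_logits)[hard_label])
    return alpha * kl + (1 - alpha) * ce
```

When the student already matches the teacher, the KL term vanishes and only the hard-label cross-entropy (weighted by `1 - alpha`) remains.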
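The average-ensembling scheme mentioned above combines models at the output level. A minimal sketch, assuming each model emits a per-class probability list (the function names are illustrative, not from the paper):

```python
def average_ensemble(prob_lists):
    """Average per-class probabilities across several models'
    outputs (output-level 'average ensembling')."""
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

def predict(probs):
    """Return the index of the highest-probability class."""
    return max(range(len(probs)), key=probs.__getitem__)
```

Concatenation-based ensembling differs in that the students' feature vectors are joined and passed through a small trained head, rather than their probabilities being averaged directly.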
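Unlike output ensembling, the greedy Model Soup method averages the *weights* of fine-tuned models. The sketch below follows the published greedy-soup recipe (rank candidates by validation score, keep a candidate only if adding it to the averaged soup does not hurt validation performance); the flat float lists standing in for model parameters and the `val_score` callback are toy assumptions.

```python
def average_weights(models):
    """Uniformly average a list of same-shape weight vectors."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

def greedy_soup(models, val_score):
    """Greedy Model Soup: sort candidates by validation score, then
    greedily add each one, keeping it only if the averaged weights
    score at least as well as the current soup."""
    ranked = sorted(models, key=val_score, reverse=True)
    soup = [ranked[0]]
    best = val_score(average_weights(soup))
    for m in ranked[1:]:
        trial = average_weights(soup + [m])
        score = val_score(trial)
        if score >= best:
            soup.append(m)
            best = score
    return average_weights(soup)
```

The appeal for memory-constrained devices is that the soup has the same size and inference cost as a single model, whereas an ensemble multiplies both.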
Source journal

Intelligence-based medicine (Health Informatics)
CiteScore: 5.00
Self-citation rate: 0.00%
Review time: 187 days