Enhanced lung cancer subtype classification using attention-integrated DeepCNN and radiomic features from CT images: a focus on feature reproducibility.

IF 2.8 · Tier 4 (Medicine) · JCR Q3 (Endocrinology & Metabolism) · Discover. Oncology · Pub Date: 2025-03-17 · DOI: 10.1007/s12672-025-02115-z
Muna Alsallal, Hanan Hassan Ahmed, Radhwan Abdul Kareem, Anupam Yadav, Subbulakshmi Ganesan, Aman Shankhyan, Sofia Gupta, Kamal Kant Joshi, Hayder Naji Sameer, Ahmed Yaseen, Zainab H Athab, Mohaned Adil, Bagher Farhood
{"title":"Enhanced lung cancer subtype classification using attention-integrated DeepCNN and radiomic features from CT images: a focus on feature reproducibility.","authors":"Muna Alsallal, Hanan Hassan Ahmed, Radhwan Abdul Kareem, Anupam Yadav, Subbulakshmi Ganesan, Aman Shankhyan, Sofia Gupta, Kamal Kant Joshi, Hayder Naji Sameer, Ahmed Yaseen, Zainab H Athab, Mohaned Adil, Bagher Farhood","doi":"10.1007/s12672-025-02115-z","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study aims to assess a hybrid framework that combines radiomic features with deep learning and attention mechanisms to improve the accuracy of classifying lung cancer subtypes using CT images.</p><p><strong>Materials and methods: </strong>A dataset of 2725 lung cancer images was used, covering various subtypes: adenocarcinoma (552 images), SCC (380 images), small cell lung cancer (SCLC) (307 images), large cell carcinoma (215 images), and pulmonary carcinoid tumors (180 images). The images were extracted as 2D slices from 3D CT scans, with tumor-containing slices selected from scans obtained across five healthcare centers. The number of slices per patient varied between 7 and 30, depending on tumor visibility. CT images were preprocessed using standardization, cropping, and Gaussian smoothing to ensure consistency across scans from different imaging instruments used at the centers. Radiomic features, including first-order statistics (FOS), shape-based, and texture-based features, were extracted using the PyRadiomics library. A DeepCNN architecture, integrated with attention mechanisms in the second convolutional block, was used for deep feature extraction, focusing on diagnostically important regions. The dataset was split into training (60%), validation (20%), and testing (20%) sets. Various feature selection techniques, such as Non-negative Matrix Factorization (NMF) and Recursive Feature Elimination (RFE), were used, and multiple machines learning models, including XGBoost and Stacking, were evaluated using accuracy, sensitivity, and AUC metrics. The model's reproducibility was validated using ICC analysis across different imaging conditions.</p><p><strong>Results: </strong>The hybrid model, which integrates DeepCNN with attention mechanisms, outperformed traditional methods. It achieved a testing accuracy of 92.47%, an AUC of 93.99%, and a sensitivity of 92.11%. XGBoost with NMF showed the best performance across all models, and the combination of radiomic and deep features improved classification further. Attention mechanisms played a key role in enhancing model performance by focusing on relevant tumor areas, reducing misclassification from irrelevant features. This also improved the performance of the 3D Autoencoder, boosting the AUC to 93.89% and accuracy to 93.24%.</p><p><strong>Conclusions: </strong>This study shows that combining radiomic features with deep learning-especially when enhanced by attention mechanisms-creates a powerful and accurate framework for classifying lung cancer subtypes. Clinical trial number Not applicable.</p>","PeriodicalId":11148,"journal":{"name":"Discover. Oncology","volume":"16 1","pages":"336"},"PeriodicalIF":2.8000,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Discover. 
Oncology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s12672-025-02115-z","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENDOCRINOLOGY & METABOLISM","Score":null,"Total":0}
Citations: 0

Abstract

Objective: This study aims to assess a hybrid framework that combines radiomic features with deep learning and attention mechanisms to improve the accuracy of classifying lung cancer subtypes using CT images.

Materials and methods: A dataset of 2725 lung cancer images was used, covering various subtypes: adenocarcinoma (552 images), squamous cell carcinoma (SCC) (380 images), small cell lung cancer (SCLC) (307 images), large cell carcinoma (215 images), and pulmonary carcinoid tumors (180 images). The images were extracted as 2D slices from 3D CT scans, with tumor-containing slices selected from scans obtained across five healthcare centers. The number of slices per patient varied between 7 and 30, depending on tumor visibility. CT images were preprocessed using standardization, cropping, and Gaussian smoothing to ensure consistency across scans from different imaging instruments used at the centers. Radiomic features, including first-order statistics (FOS), shape-based, and texture-based features, were extracted using the PyRadiomics library. A DeepCNN architecture, integrated with attention mechanisms in the second convolutional block, was used for deep feature extraction, focusing on diagnostically important regions. The dataset was split into training (60%), validation (20%), and testing (20%) sets. Various feature selection techniques, such as Non-negative Matrix Factorization (NMF) and Recursive Feature Elimination (RFE), were used, and multiple machine learning models, including XGBoost and stacking ensembles, were evaluated using accuracy, sensitivity, and AUC metrics. The model's reproducibility was validated using intraclass correlation coefficient (ICC) analysis across different imaging conditions.
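The abstract gives no implementation details, but the described radiomics-to-classifier pipeline maps onto standard open-source tooling. The sketch below is a minimal, hypothetical illustration, not the authors' code: file names, feature classes, component counts, and hyperparameters are assumptions. It shows PyRadiomics feature extraction, a stratified 60/20/20 split, NMF-based dimensionality reduction, and XGBoost evaluated with the accuracy, sensitivity, and AUC metrics named above.

```python
# Minimal sketch of the radiomics -> feature selection -> classifier pipeline.
# Paths, feature classes, and hyperparameters are illustrative assumptions,
# not values reported by the authors.
import numpy as np
from radiomics import featureextractor            # PyRadiomics
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import NMF
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score
from xgboost import XGBClassifier

# 1) Radiomic features (first-order, shape, texture) from one slice/mask pair.
extractor = featureextractor.RadiomicsFeatureExtractor(force2D=True)
extractor.disableAllFeatures()
for cls in ("firstorder", "shape2D", "glcm", "glrlm"):
    extractor.enableFeatureClassByName(cls)

def radiomic_vector(image_path, mask_path):
    """Return the numeric radiomic features for one tumor-containing slice."""
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# X: (n_slices, n_features) radiomic matrix, y: subtype labels 0..4, built by
# looping radiomic_vector over the slice/mask files (hypothetical .npy caches).
X, y = np.load("radiomic_features.npy"), np.load("labels.npy")

# 2) Stratified 60/20/20 split into training, validation, and test sets.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4,
                                            stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                            stratify=y_tmp, random_state=0)

# 3) NMF feature selection. NMF needs non-negative input, so rescale first
#    and clip the test matrix in case values fall below the training minimum.
scaler = MinMaxScaler().fit(X_tr)
nmf = NMF(n_components=32, init="nndsvda", max_iter=500, random_state=0)
Z_tr = nmf.fit_transform(scaler.transform(X_tr))
Z_te = nmf.transform(np.clip(scaler.transform(X_te), 0.0, None))

# 4) XGBoost classifier and the metrics named in the abstract.
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="mlogloss").fit(Z_tr, y_tr)
proba = clf.predict_proba(Z_te)
pred = proba.argmax(axis=1)
print("accuracy   :", accuracy_score(y_te, pred))
print("sensitivity:", recall_score(y_te, pred, average="macro"))
print("AUC (OvR)  :", roc_auc_score(y_te, proba, multi_class="ovr"))
```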

Results: The hybrid model, which integrates the DeepCNN with attention mechanisms, outperformed traditional methods, achieving a testing accuracy of 92.47%, an AUC of 93.99%, and a sensitivity of 92.11%. XGBoost with NMF performed best across all models, and combining radiomic and deep features further improved classification. Attention mechanisms played a key role in enhancing model performance by focusing on relevant tumor areas, reducing misclassification caused by irrelevant features. They also improved the performance of the 3D Autoencoder, boosting its AUC to 93.89% and accuracy to 93.24%.
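The gain attributed to attention comes from re-weighting feature maps toward tumor-relevant regions inside the convolutional stack. The abstract does not specify the attention design, so the block below is only a plausible sketch: a squeeze-and-excitation style channel-attention gate placed in the second convolutional block, with layer sizes and the five-class head chosen purely for illustration.

```python
# Hypothetical sketch of "attention in the second convolutional block".
# The paper does not give the exact design; SE-style channel attention is
# used here only as one common choice.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: re-weights channels so the network
    emphasizes feature maps that respond to tumor regions."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.gate(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                      # scale each channel by its weight

class AttentionDeepCNN(nn.Module):
    """Toy two-block CNN; the attention gate sits in the second block,
    mirroring the abstract's description."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(inplace=True), ChannelAttention(64), nn.MaxPool2d(2))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):                 # x: (batch, 1, H, W) CT slices
        return self.head(self.block2(self.block1(x)))

logits = AttentionDeepCNN()(torch.randn(2, 1, 128, 128))   # -> shape (2, 5)
```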

Conclusions: This study shows that combining radiomic features with deep learning, especially when enhanced by attention mechanisms, creates a powerful and accurate framework for classifying lung cancer subtypes. Clinical trial number: Not applicable.

Source journal
Discover. Oncology (Medicine: Endocrinology, Diabetes and Metabolism)
CiteScore: 2.40
Self-citation rate: 9.10%
Articles published: 122
Review turnaround: 5 weeks