Development and Validation of an AI-Based Multimodal Model for Pathological Staging of Gastric Cancer Using CT and Endoscopic Images.

Academic Radiology · IF 3.8 · CAS Zone 2 (Medicine) · Q1 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2025-01-02 · DOI: 10.1016/j.acra.2024.12.029
Chao Zhang, Siyuan Li, Daolai Huang, Bo Wen, Shizhuang Wei, Yaodong Song, Xianghua Wu
{"title":"Development and Validation of an AI-Based Multimodal Model for Pathological Staging of Gastric Cancer Using CT and Endoscopic Images.","authors":"Chao Zhang, Siyuan Li, Daolai Huang, Bo Wen, Shizhuang Wei, Yaodong Song, Xianghua Wu","doi":"10.1016/j.acra.2024.12.029","DOIUrl":null,"url":null,"abstract":"<p><strong>Rationale and objectives: </strong>Accurate preoperative pathological staging of gastric cancer is crucial for optimal treatment selection and improved patient outcomes. Traditional imaging methods such as CT and endoscopy have limitations in staging accuracy.</p><p><strong>Methods: </strong>This retrospective study included 691 gastric cancer patients treated from March 2017 to March 2024. Enhanced venous-phase CT and endoscopic images, along with postoperative pathological results, were collected. We developed three modeling approaches: (1) nine deep learning models applied to CT images (DeepCT), (2) 11 machine learning algorithms using handcrafted radiomic features from CT images (HandcraftedCT), and (3) ResNet-50-extracted deep features from endoscopic images followed by 11 machine learning algorithms (DeepEndo). The two top-performing models from each approach were combined into the Integrated Multi-Modal Model using a stacking ensemble method. Performance was assessed using ROC-AUC, sensitivity, and specificity.</p><p><strong>Results: </strong>The Integrated Multi-Modal Model achieved an ROC-AUC of 0.933 (95% CI, 0.887-0.979) on the test set, outperforming individual models. Sensitivity and specificity were 0.869 and 0.840, respectively. Various evaluation metrics demonstrated that the final fusion model effectively integrated the strengths of each sub-model, resulting in a balanced and robust performance with reduced false-positive and false-negative rates.</p><p><strong>Conclusion: </strong>The Integrated Multi-Modal Model effectively integrates radiomic and deep learning features from CT and endoscopic images, demonstrating superior performance in preoperative pathological staging of gastric cancer. This multimodal approach enhances predictive accuracy and provides a reliable tool for clinicians to develop individualized treatment plans, thereby improving patient outcomes.</p><p><strong>Data availability: </strong>The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons. All code used in this study is based on third-party libraries and all custom code developed for this study is available upon reasonable request from the corresponding author.</p>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":" ","pages":""},"PeriodicalIF":3.8000,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.acra.2024.12.029","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract

Rationale and objectives: Accurate preoperative pathological staging of gastric cancer is crucial for optimal treatment selection and improved patient outcomes. Traditional imaging methods such as CT and endoscopy have limitations in staging accuracy.

Methods: This retrospective study included 691 gastric cancer patients treated from March 2017 to March 2024. Enhanced venous-phase CT and endoscopic images, along with postoperative pathological results, were collected. We developed three modeling approaches: (1) nine deep learning models applied to CT images (DeepCT), (2) 11 machine learning algorithms using handcrafted radiomic features from CT images (HandcraftedCT), and (3) 11 machine learning algorithms applied to deep features extracted from endoscopic images with ResNet-50 (DeepEndo). The two top-performing models from each approach were combined into the Integrated Multi-Modal Model using a stacking ensemble method. Performance was assessed using ROC-AUC, sensitivity, and specificity.
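
A minimal Python sketch of the two building blocks described above, assuming PyTorch/torchvision and scikit-learn: ResNet-50 used as a fixed feature extractor for endoscopic images, and a stacking ensemble that fuses top-performing base classifiers. The specific base models (random forest, SVM), the logistic-regression meta-learner, and all variable names are illustrative assumptions, not the authors' published configuration.

import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# DeepEndo-style feature extractor: ResNet-50 with its classification head removed.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
feature_extractor = nn.Sequential(*list(resnet.children())[:-1])  # global-pooled 2048-d output
feature_extractor.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_endo_features(pil_images):
    """Return an (N, 2048) NumPy array of deep features for a list of PIL endoscopic images."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    feats = feature_extractor(batch).flatten(1)  # (N, 2048)
    return feats.numpy()

# Stacking ensemble fusing base models; a logistic-regression meta-learner combines
# their cross-validated class probabilities (illustrative model choices).
stacked = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=5,
)
# stacked.fit(X_train, y_train) would then be called on the fused feature matrix,
# e.g. CT radiomic/deep features concatenated with the endoscopic deep features.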

Results: The Integrated Multi-Modal Model achieved an ROC-AUC of 0.933 (95% CI, 0.887-0.979) on the test set, outperforming individual models. Sensitivity and specificity were 0.869 and 0.840, respectively. Various evaluation metrics demonstrated that the final fusion model effectively integrated the strengths of each sub-model, resulting in a balanced and robust performance with reduced false-positive and false-negative rates.
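
For reference, metrics of this kind can be computed from predicted probabilities as in the generic scikit-learn sketch below; the 0.5 decision threshold and variable names are assumptions, not the authors' evaluation code.

import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_test, y_prob, threshold=0.5):
    """Compute ROC-AUC, sensitivity, and specificity for binary labels and scores."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "roc_auc": roc_auc_score(y_test, y_prob),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }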

Conclusion: The Integrated Multi-Modal Model effectively integrates radiomic and deep learning features from CT and endoscopic images, demonstrating superior performance in preoperative pathological staging of gastric cancer. This multimodal approach enhances predictive accuracy and provides a reliable tool for clinicians to develop individualized treatment plans, thereby improving patient outcomes.

Data availability: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons. All code used in this study is based on third-party libraries, and all custom code developed for this study is available upon reasonable request from the corresponding author.

Source journal: Academic Radiology (Medicine, Nuclear Medicine)
CiteScore: 7.60
Self-citation rate: 10.40%
Articles per year: 432
Review time: 18 days
Aims and scope: Academic Radiology publishes original reports of clinical and laboratory investigations in diagnostic imaging, the diagnostic use of radioactive isotopes, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasound, digital subtraction angiography, image-guided interventions and related techniques. It also includes brief technical reports describing original observations, techniques, and instrumental developments; state-of-the-art reports on clinical issues, new technology and other topics of current medical importance; meta-analyses; scientific studies and opinions on radiologic education; and letters to the Editor.