Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images

Ronghui Tian, Guoxiu Lu, Nannan Zhao, Wei Qian, He Ma, Wei Yang
{"title":"基于多模态图像的多特征分析构建良性和恶性乳腺肿瘤的最佳分类模型","authors":"Ronghui Tian, Guoxiu Lu, Nannan Zhao, Wei Qian, He Ma, Wei Yang","doi":"10.1007/s10278-024-01036-7","DOIUrl":null,"url":null,"abstract":"<p><p>The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors from DBT-CCs and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors via DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300407/pdf/","citationCount":"0","resultStr":"{\"title\":\"Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images.\",\"authors\":\"Ronghui Tian, Guoxiu Lu, Nannan Zhao, Wei Qian, He Ma, Wei Yang\",\"doi\":\"10.1007/s10278-024-01036-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. 
For the experiments, we extracted conventional radiomic and deep features from tumors from DBT-CCs and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors via DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.</p>\",\"PeriodicalId\":516858,\"journal\":{\"name\":\"Journal of imaging informatics in medicine\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300407/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of imaging informatics in medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s10278-024-01036-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/2/21 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-024-01036-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/2/21 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract


The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from the tumors in the DBT-CC and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors in the DBT-CC and US images. Features from the intratumoral and peritumoral regions were fused separately. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of the deep features and the peritumoral radiomic features from three layers in the DBT-CC and US images achieved the optimal performance, with an accuracy of 0.953, an AUC of 0.959 [CI: 0.886-0.996], a sensitivity of 0.952 [CI: 0.888-0.992], a specificity of 0.955 [CI: 0.868-0.985], and a precision of 0.976. The experimental results indicate that a fusion model combining deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.
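The abstract does not include implementation details, so the sketches below are illustrative rather than the authors' code. The first sketch shows how intratumoral and peritumoral radiomic features and Inception-v3 deep features could be obtained with PyRadiomics, SimpleITK, and Keras; the bin width, dilation radius, number of peritumoral layers, and helper names (peritumoral_rings, radiomic_features, deep_features) are assumptions made for illustration.

```python
# Minimal sketch (not the authors' pipeline): radiomic features from the tumor
# and from concentric peritumoral rings, plus Inception-v3 deep features.
import numpy as np
import SimpleITK as sitk
import tensorflow as tf
from radiomics import featureextractor

# Bin width and force2D are illustrative settings, not values from the paper.
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25, force2D=True)

def peritumoral_rings(mask, n_layers=4, radius_px=2):
    """Build n_layers concentric ring masks by repeatedly dilating the tumor mask."""
    mask = sitk.Cast(mask > 0, sitk.sitkUInt8)
    rings, inner = [], mask
    for _ in range(n_layers):
        outer = sitk.BinaryDilate(inner, [radius_px] * mask.GetDimension())
        rings.append(outer - inner)  # keep only the voxels added by this dilation step
        inner = outer
    return rings

def radiomic_features(image, mask):
    """Run PyRadiomics and keep only the numeric feature values (drop diagnostics)."""
    result = extractor.execute(image, mask, label=1)
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# 2048-D global-average-pooled Inception-v3 activations serve as deep features.
backbone = tf.keras.applications.InceptionV3(include_top=False,
                                             weights="imagenet", pooling="avg")

def deep_features(patches):
    """patches: (N, 299, 299, 3) float array of resized tumor crops."""
    x = tf.keras.applications.inception_v3.preprocess_input(patches.astype("float32"))
    return backbone.predict(x, verbose=0)
```

The second sketch illustrates the late-fusion stacking strategy under the assumption that one base SVM is trained per feature block (deep features and each peritumoral radiomic layer, per modality) and an SVM meta-learner is fit on their out-of-fold malignancy probabilities; stacking_fit_predict, the RBF kernel, and the 5-fold split are likewise hypothetical choices.

```python
# Minimal sketch of late-fusion stacking with an SVM meta-learner (assumed setup).
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_base():
    # Standardize each feature block before the SVM; RBF kernel is an assumption.
    return make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))

def stacking_fit_predict(train_blocks, y_train, test_blocks, y_test, seed=0):
    """train_blocks/test_blocks: lists of (n_samples, n_features) arrays,
    one array per feature block, in the same order for train and test."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)

    # Level 0: out-of-fold probabilities on the training set; refit on the
    # full training set to score the held-out test set.
    meta_train, meta_test = [], []
    for X_tr, X_te in zip(train_blocks, test_blocks):
        base = make_base()
        oof = cross_val_predict(base, X_tr, y_train, cv=cv,
                                method="predict_proba")[:, 1]
        meta_train.append(oof)
        meta_test.append(base.fit(X_tr, y_train).predict_proba(X_te)[:, 1])

    # Level 1: SVM meta-learner on the stacked probabilities.
    meta = make_base()
    meta.fit(np.column_stack(meta_train), y_train)
    prob = meta.predict_proba(np.column_stack(meta_test))[:, 1]
    pred = (prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    return {"accuracy": accuracy_score(y_test, pred),
            "auc": roc_auc_score(y_test, prob),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}
```

Using out-of-fold probabilities as the meta-features keeps the level-1 SVM from seeing predictions produced by models fit on the same labels, which is what distinguishes stacking from a simple ensemble average of the base classifiers.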
