
Journal of imaging informatics in medicine: Latest Publications

DeepCSFusion: Deep Compressive Sensing Fusion for Efficient COVID-19 Classification.
Pub Date : 2024-08-01 Epub Date: 2024-02-21 DOI: 10.1007/s10278-024-01011-2
Dina A Ragab, Salema Fayed, Noha Ghatwary

Worldwide, the COVID-19 epidemic, which started in 2019, has resulted in millions of deaths. During the pandemic, the medical research community has made wide use of computational analysis of medical data, particularly deep learning models. Deploying models on devices with constrained resources is a significant challenge due to the increased storage demands associated with larger deep learning models. Accordingly, in this paper, we propose a novel compression strategy that compresses deep features at compression ratios ranging from 10 to 90% to accurately classify COVID-19 and non-COVID-19 computed tomography scans. Additionally, we extensively validated the compression using various available deep learning methods to extract the most suitable features from different models. Finally, the suggested DeepCSFusion model compresses the extracted features and applies fusion to achieve the highest classification accuracy with fewer features. The proposed DeepCSFusion model was validated on the publicly available "SARS-CoV-2 CT" dataset, composed of 1252 CT scans. This study demonstrates that the proposed DeepCSFusion reduced the computational time while achieving an overall accuracy of 99.3%. It also outperforms state-of-the-art pipelines in terms of various classification measures.
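
The central operation the abstract describes, compressing a deep feature vector at a chosen compression ratio, can be illustrated with a classic compressive-sensing measurement step. The sketch below is a minimal NumPy illustration assuming a random Gaussian measurement matrix; the paper's exact sensing operator, feature extractor, and fusion scheme are not specified here.

```python
import numpy as np

def compress_features(features: np.ndarray, ratio: float, seed: int = 0) -> np.ndarray:
    """Project a 1-D deep-feature vector into a lower-dimensional space.

    ratio is the compression ratio, e.g. 0.10 keeps 10% of the dimensions.
    A random Gaussian matrix is a standard compressive-sensing measurement
    operator; the paper's exact operator is not given here.
    """
    n = features.shape[0]
    m = max(1, int(n * ratio))  # compressed dimensionality
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # measurement matrix
    return phi @ features

# Example: compress a hypothetical 2048-D CNN feature vector to 10%.
deep_features = np.random.rand(2048)
compressed = compress_features(deep_features, ratio=0.10)
print(compressed.shape)  # (204,)
```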

{"title":"DeepCSFusion: Deep Compressive Sensing Fusion for Efficient COVID-19 Classification.","authors":"Dina A Ragab, Salema Fayed, Noha Ghatwary","doi":"10.1007/s10278-024-01011-2","DOIUrl":"10.1007/s10278-024-01011-2","url":null,"abstract":"<p><p>Worldwide, the COVID-19 epidemic, which started in 2019, has resulted in millions of deaths. The medical research community has widely used computer analysis of medical data during the pandemic, specifically deep learning models. Deploying models on devices with constrained resources is a significant challenge due to the increased storage demands associated with larger deep learning models. Accordingly, in this paper, we propose a novel compression strategy that compresses deep features with a compression ratio of 10 to 90% to accurately classify the COVID-19 and non-COVID-19 computed tomography scans. Additionally, we extensively validated the compression using various available deep learning methods to extract the most suitable features from different models. Finally, the suggested DeepCSFusion model compresses the extracted features and applies fusion to achieve the highest classification accuracy with fewer features. The proposed DeepCSFusion model was validated on the publicly available dataset \"SARS-CoV-2 CT\" scans composed of 1252 CT. This study demonstrates that the proposed DeepCSFusion reduced the computational time with an overall accuracy of 99.3%. Also, it outperforms state-of-the-art pipelines in terms of various classification measures.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300776/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139914401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Automatic Grading System for Orthodontically Induced External Root Resorption Based on Deep Convolutional Neural Network.
Pub Date : 2024-08-01 Epub Date: 2024-02-23 DOI: 10.1007/s10278-024-01045-6
Shuxi Xu, Houli Peng, Lanxin Yang, Wenjie Zhong, Xiang Gao, Jinlin Song

Orthodontically induced external root resorption (OIERR) is a common complication of orthodontic treatments. Accurate OIERR grading is crucial for clinical intervention. This study aimed to evaluate six deep convolutional neural networks (CNNs) for performing OIERR grading on tooth slices in order to construct an automatic grading system for OIERR. A total of 2146 tooth slices of different OIERR grades were collected and preprocessed. Six pre-trained CNNs (EfficientNet-B1, EfficientNet-B2, EfficientNet-B3, EfficientNet-B4, EfficientNet-B5, and MobileNet-V3) were trained and validated on the pre-processed images using four different cross-validation methods. The performances of the CNNs on a test set were evaluated and compared with those of orthodontists. The gradient-weighted class activation mapping (Grad-CAM) technique was used to visualize the regions of the tooth slices with the greatest impact on model decisions. The six CNN models performed remarkably well in OIERR grading, with a mean accuracy of 0.92, surpassing that of the orthodontists (mean accuracy of 0.82). EfficientNet-B4 trained with fivefold cross-validation emerged as the final OIERR grading system, with a high accuracy of 0.94. Grad-CAM revealed that the apical region had the greatest effect on the OIERR grading system. The six CNNs demonstrated excellent OIERR grading performance and outperformed orthodontists. The proposed OIERR grading system holds potential as a reliable diagnostic support for orthodontists in clinical practice.
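
As a rough illustration of the training setup described above, the sketch below fine-tunes a pre-trained EfficientNet-B4 (the architecture that emerged as the final grading system) under fivefold cross-validation. It uses torchvision and scikit-learn as assumed tooling, with a placeholder number of OIERR grades and dummy labels; the authors' actual pipeline, hyperparameters, and grade taxonomy may differ.

```python
import torch.nn as nn
from torchvision import models
from sklearn.model_selection import StratifiedKFold

NUM_GRADES = 4  # placeholder; the paper defines the actual OIERR grade taxonomy

def build_model() -> nn.Module:
    # Pre-trained EfficientNet-B4 with its classifier head replaced
    model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.DEFAULT)
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, NUM_GRADES)
    return model

# Dummy slice identifiers and grade labels standing in for the 2146 tooth slices
slice_ids = list(range(20))
grade_labels = [i % NUM_GRADES for i in range(20)]

# Fivefold cross-validation, as used for the final model
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(slice_ids, grade_labels)):
    model = build_model()
    # ... train on train_idx, validate on val_idx, keep the best fold model ...
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val slices")
```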

{"title":"An Automatic Grading System for Orthodontically Induced External Root Resorption Based on Deep Convolutional Neural Network.","authors":"Shuxi Xu, Houli Peng, Lanxin Yang, Wenjie Zhong, Xiang Gao, Jinlin Song","doi":"10.1007/s10278-024-01045-6","DOIUrl":"10.1007/s10278-024-01045-6","url":null,"abstract":"<p><p>Orthodontically induced external root resorption (OIERR) is a common complication of orthodontic treatments. Accurate OIERR grading is crucial for clinical intervention. This study aimed to evaluate six deep convolutional neural networks (CNNs) for performing OIERR grading on tooth slices to construct an automatic grading system for OIERR. A total of 2146 tooth slices of different OIERR grades were collected and preprocessed. Six pre-trained CNNs (EfficientNet-B1, EfficientNet-B2, EfficientNet-B3, EfficientNet-B4, EfficientNet-B5, and MobileNet-V3) were trained and validated on the pre-processed images based on four different cross-validation methods. The performances of the CNNs on a test set were evaluated and compared with those of orthodontists. The gradient-weighted class activation mapping (Grad-CAM) technique was used to explore the area of maximum impact on the model decisions in the tooth slices. The six CNN models performed remarkably well in OIERR grading, with a mean accuracy of 0.92, surpassing that of the orthodontists (mean accuracy of 0.82). EfficientNet-B4 trained with fivefold cross-validation emerged as the final OIERR grading system, with a high accuracy of 0.94. Grad-CAM revealed that the apical region had the greatest effect on the OIERR grading system. The six CNNs demonstrated excellent OIERR grading and outperformed orthodontists. The proposed OIERR grading system holds potential as a reliable diagnostic support for orthodontists in clinical practice.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300848/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139935079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development and Preliminary Validation of a Novel Convolutional Neural Network Model for Predicting Treatment Response in Patients with Unresectable Hepatocellular Carcinoma Receiving Hepatic Arterial Infusion Chemotherapy.
Pub Date : 2024-08-01 Epub Date: 2024-02-23 DOI: 10.1007/s10278-024-01003-2
Bing Quan, Jinghuan Li, Hailin Mi, Miao Li, Wenfeng Liu, Fan Yao, Rongxin Chen, Yan Shan, Pengju Xu, Zhenggang Ren, Xin Yin

The goal of this study was to evaluate the performance of a convolutional neural network (CNN) combining preoperative MRI and clinical factors in predicting the treatment response of unresectable hepatocellular carcinoma (HCC) patients receiving hepatic arterial infusion chemotherapy (HAIC). A total of 191 patients with unresectable HCC who underwent HAIC in our hospital between May 2019 and March 2022 were retrospectively recruited. We selected InceptionV4 from three representative CNN models (AlexNet, ResNet, and InceptionV4) according to the cross-entropy loss (CEL). We subsequently extended InceptionV4 to fuse the information from qualified pretreatment MRI data and patient clinical factors. Radiomic information was evaluated based on several fixed sequences, including enhanced T1-weighted sequences (with arterial, portal, and delayed phases), T2 FSE sequences, and dual-echo sequences. The performance of InceptionV4 was cross-validated in the training cohort (n = 127) and internally validated in an independent cohort (n = 64), with comparisons against single important clinical factors and radiologists in terms of receiver operating characteristic (ROC) curves. Class activation mapping was used to visualize the InceptionV4 model. The InceptionV4 model achieved an AUC of 0.871 (95% confidence interval [CI] 0.761-0.981) in the cross-validation cohort and an AUC of 0.826 (95% CI 0.682-0.970) in the internal validation cohort, performing better than the other methods in both settings (AUC ranges 0.783-0.873 and 0.708-0.806 for cross- and internal validations, respectively; P < 0.01). The present InceptionV4 model, which integrates radiomic information and clinical factors, helps predict the treatment response of unresectable HCC patients receiving HAIC treatment.
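
The fusion step, combining CNN image features with patient clinical factors, can be sketched as a two-stream PyTorch module. Since torchvision ships Inception-v3 rather than InceptionV4, the sketch uses it as a stand-in; the layer sizes, the clinical-factor count, and the fusion head are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    """Illustrative two-stream model: CNN features from MRI fused with
    clinical factors. Sizes and head design are assumptions."""

    def __init__(self, n_clinical: int = 8, n_classes: int = 2):
        super().__init__()
        backbone = models.inception_v3(weights=None, aux_logits=True, init_weights=True)
        backbone.fc = nn.Identity()  # expose the 2048-D pooled features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048 + n_clinical, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, image, clinical):
        feats = self.backbone(image)
        if isinstance(feats, tuple):  # training mode returns (logits, aux_logits)
            feats = feats[0]
        return self.head(torch.cat([feats, clinical], dim=1))

model = FusionNet().eval()
img = torch.randn(2, 3, 299, 299)   # Inception expects 299x299 inputs
clin = torch.randn(2, 8)            # 8 clinical factors, an assumed count
print(model(img, clin).shape)       # torch.Size([2, 2])
```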

{"title":"Development and Preliminary Validation of a Novel Convolutional Neural Network Model for Predicting Treatment Response in Patients with Unresectable Hepatocellular Carcinoma Receiving Hepatic Arterial Infusion Chemotherapy.","authors":"Bing Quan, Jinghuan Li, Hailin Mi, Miao Li, Wenfeng Liu, Fan Yao, Rongxin Chen, Yan Shan, Pengju Xu, Zhenggang Ren, Xin Yin","doi":"10.1007/s10278-024-01003-2","DOIUrl":"10.1007/s10278-024-01003-2","url":null,"abstract":"<p><p>The goal of this study was to evaluate the performance of a convolutional neural network (CNN) with preoperative MRI and clinical factors in predicting the treatment response of unresectable hepatocellular carcinoma (HCC) patients receiving hepatic arterial infusion chemotherapy (HAIC). A total of 191 patients with unresectable HCC who underwent HAIC in our hospital between May 2019 and March 2022 were retrospectively recruited. We selected InceptionV4 from three representative CNN models, AlexNet, ResNet, and InceptionV4, according to the cross-entropy loss (CEL). We subsequently developed InceptionV4 to fuse the information from qualified pretreatment MRI data and patient clinical factors. Radiomic information was evaluated based on several constant sequences, including enhanced T1-weighted sequences (with arterial, portal, and delayed phases), T2 FSE sequences, and dual-echo sequences. The performance of InceptionV4 was cross-validated in the training cohort (n = 127) and internally validated in an independent cohort (n = 64), with comparisons against single important clinical factors and radiologists in terms of receiver operating characteristic (ROC) curves. Class activation mapping was used to visualize the InceptionV4 model. The InceptionV4 model achieved an AUC of 0.871 (95% confidence interval [CI] 0.761-0.981) in the cross-validation cohort and an AUC of 0.826 (95% CI 0.682-0.970) in the internal validation cohort; these two models performed better than did the other methods (AUC ranges 0.783-0.873 and 0.708-0.806 for cross- and internal validations, respectively; P < 0.01). The present InceptionV4 model, which integrates radiomic information and clinical factors, helps predict the treatment response of unresectable HCC patients receiving HAIC treatment.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300745/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139935091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Coronary Computed Tomography Angiography Using a Novel Deep Learning-Based Algorithm.
Pub Date : 2024-08-01 Epub Date: 2024-03-04 DOI: 10.1007/s10278-024-01033-w
H J H Dreesen, C Stroszczynski, M M Lell

Coronary computed tomography angiography (CCTA) is an essential part of the diagnosis of chronic coronary syndrome (CCS) in patients with low-to-intermediate pre-test probability. The minimum technical requirement is 64-row multidetector CT (64-MDCT), which is still frequently used, although it is prone to motion artifacts because of its limited temporal resolution and z-coverage. In this study, we evaluate the potential of a deep-learning-based motion correction algorithm (MCA) to eliminate these motion artifacts. A total of 124 64-MDCT-acquired CCTA examinations with at least minor motion artifacts were included. Images were reconstructed using a conventional reconstruction algorithm (CA) and an MCA. Image quality (IQ), according to a 5-point Likert score, was evaluated per-segment, per-artery, and per-patient and was correlated with potentially disturbing factors (heart rate (HR), intra-cycle HR changes, BMI, age, and sex). Comparisons were performed with the Wilcoxon signed-rank test, and correlations with Spearman's rho. Per-patient, insufficient IQ decreased by 5.26%, and sufficient IQ increased by 9.66% with the MCA. Per-artery, insufficient IQ of the right coronary artery (RCA) decreased by 18.18%, and sufficient IQ increased by 27.27%. Per-segment, insufficient IQ in segments 1 and 2 decreased by 11.51% and 24.78%, respectively, and sufficient IQ increased by 10.62% and 18.58%, respectively. Total artifacts per artery decreased in the RCA from 3.11 ± 1.65 to 2.26 ± 1.52. The dependence of RCA IQ on HR decreased to an intermediate correlation in images with MCA reconstruction. The applied MCA improves the IQ of 64-MDCT-acquired images and reduces the influence of HR on IQ, increasing the validity of 64-MDCT in the diagnosis of CCS.
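
Both statistical tests named in the abstract are available in SciPy. The sketch below shows them on made-up paired Likert scores and heart rates, purely to illustrate the comparison and correlation steps; the numbers are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

# Illustrative paired 5-point Likert IQ scores for the same segments,
# reconstructed with the conventional algorithm (CA) vs. the MCA.
iq_ca  = np.array([2, 3, 3, 2, 4, 3, 2, 3, 4, 3])
iq_mca = np.array([3, 4, 3, 3, 4, 4, 3, 3, 4, 4])

stat, p = wilcoxon(iq_ca, iq_mca)  # paired, non-parametric comparison
print(f"Wilcoxon: W={stat:.1f}, p={p:.4f}")

heart_rate = np.array([55, 60, 64, 70, 75, 80, 85, 88, 92, 95])
rho, p_rho = spearmanr(heart_rate, iq_mca)  # rank correlation of HR with IQ
print(f"Spearman: rho={rho:.2f}, p={p_rho:.4f}")
```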

{"title":"Optimizing Coronary Computed Tomography Angiography Using a Novel Deep Learning-Based Algorithm.","authors":"H J H Dreesen, C Stroszczynski, M M Lell","doi":"10.1007/s10278-024-01033-w","DOIUrl":"10.1007/s10278-024-01033-w","url":null,"abstract":"<p><p>Coronary computed tomography angiography (CCTA) is an essential part of the diagnosis of chronic coronary syndrome (CCS) in patients with low-to-intermediate pre-test probability. The minimum technical requirement is 64-row multidetector CT (64-MDCT), which is still frequently used, although it is prone to motion artifacts because of its limited temporal resolution and z-coverage. In this study, we evaluate the potential of a deep-learning-based motion correction algorithm (MCA) to eliminate these motion artifacts. 124 64-MDCT-acquired CCTA examinations with at least minor motion artifacts were included. Images were reconstructed using a conventional reconstruction algorithm (CA) and a MCA. Image quality (IQ), according to a 5-point Likert score, was evaluated per-segment, per-artery, and per-patient and was correlated with potentially disturbing factors (heart rate (HR), intra-cycle HR changes, BMI, age, and sex). Comparison was done by Wilcoxon-Signed-Rank test, and correlation by Spearman's Rho. Per-patient, insufficient IQ decreased by 5.26%, and sufficient IQ increased by 9.66% with MCA. Per-artery, insufficient IQ of the right coronary artery (RCA) decreased by 18.18%, and sufficient IQ increased by 27.27%. Per-segment, insufficient IQ in segments 1 and 2 decreased by 11.51% and 24.78%, respectively, and sufficient IQ increased by 10.62% and 18.58%, respectively. Total artifacts per-artery decreased in the RCA from 3.11 ± 1.65 to 2.26 ± 1.52. HR dependence of RCA IQ decreased to intermediate correlation in images with MCA reconstruction. The applied MCA improves the IQ of 64-MDCT-acquired images and reduces the influence of HR on IQ, increasing 64-MDCT validity in the diagnosis of CCS.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300758/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140029985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
OralEpitheliumDB: A Dataset for Oral Epithelial Dysplasia Image Segmentation and Classification.
Pub Date : 2024-08-01 Epub Date: 2024-02-26 DOI: 10.1007/s10278-024-01041-w
Adriano Barbosa Silva, Alessandro Santana Martins, Thaína Aparecida Azevedo Tosta, Adriano Mota Loyola, Sérgio Vitorino Cardoso, Leandro Alves Neves, Paulo Rogério de Faria, Marcelo Zanchetta do Nascimento

Early diagnosis of potentially malignant disorders, such as oral epithelial dysplasia, is the most reliable way to prevent oral cancer. Computational algorithms have been used as an auxiliary tool to aid specialists in this process. Usually, experiments are performed on private data, making it difficult to reproduce the results. There are several public datasets of histological images, but studies focused on oral dysplasia images use inaccessible datasets. This hinders the improvement of algorithms aimed at this lesion. This study introduces an annotated public dataset of oral epithelial dysplasia tissue images. The dataset includes 456 images acquired from 30 mouse tongues. The images were categorized among the lesion grades, with nuclear structures manually marked by a trained specialist and validated by a pathologist. Experiments were also carried out to illustrate the potential of the proposed dataset in the classification and segmentation processes commonly explored in the literature. Convolutional neural network (CNN) models for semantic and instance segmentation were employed on the images, which were pre-processed with stain normalization methods. Then, the segmented and non-segmented images were classified with CNN architectures and machine learning algorithms. The data obtained through these processes are available in the dataset. The segmentation stage achieved an F1-score of 0.83, obtained with a U-Net model using ResNet-50 as the backbone. At the classification stage, the best result was achieved with the Random Forest method, with an accuracy of 94.22%. The results show that the segmentation contributed to the classification results, but further studies are needed to improve these stages of automated diagnosis. The original, gold standard, normalized, and segmented images are publicly available and may be used for the improvement of clinical applications of CAD methods on oral epithelial dysplasia tissue images.
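
For the segmentation stage, a U-Net with a ResNet-50 backbone (as reported above) can be instantiated in a few lines. The sketch below uses the segmentation_models_pytorch package, which is an assumed tool choice, not necessarily the authors' implementation, and arbitrary input dimensions.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet-50 encoder, as named in the abstract. The package
# choice and hyperparameters are assumptions.
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",  # ImageNet pre-training for the encoder
    in_channels=3,
    classes=1,                   # binary mask: nuclear structure vs. background
)

x = torch.randn(1, 3, 256, 256)  # H and W must be divisible by 32 for this encoder
with torch.no_grad():
    mask_logits = model(x)
print(mask_logits.shape)         # torch.Size([1, 1, 256, 256])
```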

{"title":"OralEpitheliumDB: A Dataset for Oral Epithelial Dysplasia Image Segmentation and Classification.","authors":"Adriano Barbosa Silva, Alessandro Santana Martins, Thaína Aparecida Azevedo Tosta, Adriano Mota Loyola, Sérgio Vitorino Cardoso, Leandro Alves Neves, Paulo Rogério de Faria, Marcelo Zanchetta do Nascimento","doi":"10.1007/s10278-024-01041-w","DOIUrl":"10.1007/s10278-024-01041-w","url":null,"abstract":"<p><p>Early diagnosis of potentially malignant disorders, such as oral epithelial dysplasia, is the most reliable way to prevent oral cancer. Computational algorithms have been used as an auxiliary tool to aid specialists in this process. Usually, experiments are performed on private data, making it difficult to reproduce the results. There are several public datasets of histological images, but studies focused on oral dysplasia images use inaccessible datasets. This prevents the improvement of algorithms aimed at this lesion. This study introduces an annotated public dataset of oral epithelial dysplasia tissue images. The dataset includes 456 images acquired from 30 mouse tongues. The images were categorized among the lesion grades, with nuclear structures manually marked by a trained specialist and validated by a pathologist. Also, experiments were carried out in order to illustrate the potential of the proposed dataset in classification and segmentation processes commonly explored in the literature. Convolutional neural network (CNN) models for semantic and instance segmentation were employed on the images, which were pre-processed with stain normalization methods. Then, the segmented and non-segmented images were classified with CNN architectures and machine learning algorithms. The data obtained through these processes is available in the dataset. The segmentation stage showed the F1-score value of 0.83, obtained with the U-Net model using the ResNet-50 as a backbone. At the classification stage, the most expressive result was achieved with the Random Forest method, with an accuracy value of 94.22%. The results show that the segmentation contributed to the classification results, but studies are needed for the improvement of these stages of automated diagnosis. The original, gold standard, normalized, and segmented images are publicly available and may be used for the improvement of clinical applications of CAD methods on oral epithelial dysplasia tissue images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139975360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-Preserving Breast Cancer Classification: A Federated Transfer Learning Approach.
Pub Date : 2024-08-01 Epub Date: 2024-02-29 DOI: 10.1007/s10278-024-01035-8
Selvakanmani S, G Dharani Devi, Rekha V, J Jeyalakshmi

Breast cancer is a deadly disease, causing a considerable number of fatalities among women worldwide. To enhance patient outcomes as well as survival rates, early and accurate detection is crucial. Machine learning techniques, particularly deep learning, have demonstrated impressive success in various image recognition tasks, including breast cancer classification. However, the reliance on large labeled datasets poses challenges in the medical domain due to privacy issues and data silos. This study proposes a novel transfer learning approach integrated into a federated learning framework to overcome the limitations of scarce labeled data and data privacy in collaborative healthcare settings. For breast cancer classification, mammography and MRO images were gathered from three different medical centers. Federated learning, an emerging privacy-preserving paradigm, empowers multiple medical institutions to jointly train a global model while maintaining data decentralization. Our proposed methodology capitalizes on the power of pre-trained ResNet, a deep neural network architecture, as a feature extractor. By fine-tuning the higher layers of ResNet using breast cancer datasets from diverse medical centers, we enable the model to learn specialized features relevant to different domains while leveraging the comprehensive image representations acquired from large-scale datasets like ImageNet. To overcome domain-shift challenges caused by variations in data distributions across medical centers, we introduce domain adversarial training. The model learns to minimize the domain discrepancy while maximizing classification accuracy, facilitating the acquisition of domain-invariant features. We conducted extensive experiments on diverse breast cancer datasets obtained from multiple medical centers. Comparative analysis was performed to evaluate the proposed approach against traditional standalone training and federated learning without domain adaptation. Compared with traditional models, our proposed model showed a classification accuracy of 98.8% and a computational time of 12.22 s. The results showcase promising enhancements in classification accuracy and model generalization, underscoring the potential of our method to improve breast cancer classification performance while upholding data privacy in a federated healthcare environment.
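
The federated part of the approach can be reduced to a FedAvg-style aggregation step: each center trains locally and only model parameters are shared. Below is a minimal, unweighted sketch in PyTorch; the paper's actual protocol (client weighting, communication rounds, and the domain-adversarial component) is not reproduced here.

```python
import copy
import torch

def federated_average(client_models):
    """FedAvg-style aggregation: average corresponding parameters across
    clients. This sketch is unweighted; in practice clients are usually
    weighted by their sample counts."""
    global_model = copy.deepcopy(client_models[0])
    global_state = global_model.state_dict()
    for key in global_state:
        stacked = torch.stack([m.state_dict()[key].float() for m in client_models])
        global_state[key] = stacked.mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model

# Example round with three "medical centers" holding identical architectures.
clients = [torch.nn.Linear(16, 2) for _ in range(3)]
global_model = federated_average(clients)
print(global_model.weight.shape)  # torch.Size([2, 16])
```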

{"title":"Privacy-Preserving Breast Cancer Classification: A Federated Transfer Learning Approach.","authors":"Selvakanmani S, G Dharani Devi, Rekha V, J Jeyalakshmi","doi":"10.1007/s10278-024-01035-8","DOIUrl":"10.1007/s10278-024-01035-8","url":null,"abstract":"<p><p>Breast cancer is deadly cancer causing a considerable number of fatalities among women in worldwide. To enhance patient outcomes as well as survival rates, early and accurate detection is crucial. Machine learning techniques, particularly deep learning, have demonstrated impressive success in various image recognition tasks, including breast cancer classification. However, the reliance on large labeled datasets poses challenges in the medical domain due to privacy issues and data silos. This study proposes a novel transfer learning approach integrated into a federated learning framework to solve the limitations of limited labeled data and data privacy in collaborative healthcare settings. For breast cancer classification, the mammography and MRO images were gathered from three different medical centers. Federated learning, an emerging privacy-preserving paradigm, empowers multiple medical institutions to jointly train the global model while maintaining data decentralization. Our proposed methodology capitalizes on the power of pre-trained ResNet, a deep neural network architecture, as a feature extractor. By fine-tuning the higher layers of ResNet using breast cancer datasets from diverse medical centers, we enable the model to learn specialized features relevant to different domains while leveraging the comprehensive image representations acquired from large-scale datasets like ImageNet. To overcome domain shift challenges caused by variations in data distributions across medical centers, we introduce domain adversarial training. The model learns to minimize the domain discrepancy while maximizing classification accuracy, facilitating the acquisition of domain-invariant features. We conducted extensive experiments on diverse breast cancer datasets obtained from multiple medical centers. Comparative analysis was performed to evaluate the proposed approach against traditional standalone training and federated learning without domain adaptation. When compared with traditional models, our proposed model showed a classification accuracy of 98.8% and a computational time of 12.22 s. The results showcase promising enhancements in classification accuracy and model generalization, underscoring the potential of our method in improving breast cancer classification performance while upholding data privacy in a federated healthcare environment.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300768/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
URI-CADS: A Fully Automated Computer-Aided Diagnosis System for Ultrasound Renal Imaging.
Pub Date : 2024-08-01 Epub Date: 2024-02-27 DOI: 10.1007/s10278-024-01055-4
Miguel Molina-Moreno, Iván González-Díaz, Maite Rivera Gorrín, Víctor Burguera Vion, Fernando Díaz-de-María

Ultrasound is a widespread imaging modality, with special application in medical fields such as nephrology. However, automated approaches for ultrasound renal interpretation still pose some challenges: (1) the need for manual supervision by experts at various stages of the system, which prevents its adoption in primary healthcare, and (2) their limited taxonomies (e.g., a reduced number of considered pathologies), which make them unsuitable for training practitioners and providing support to experts. This paper proposes a fully automated computer-aided diagnosis system for ultrasound renal imaging addressing both of these challenges. Our system is based on a multi-task architecture, which is implemented by a three-branched convolutional neural network and is capable of segmenting the kidney and detecting global and local pathologies with no need for human interaction during diagnosis. The integration of different image perspectives at distinct granularities enhanced the proposed diagnosis. We employ a large (1985 images) and demanding ultrasound renal imaging database, publicly released with the system and annotated on the basis of an exhaustive taxonomy of two global and nine local pathologies (including cysts, lithiasis, hydronephrosis, and angiomyolipoma), establishing a benchmark for ultrasound renal interpretation. Experiments show that our proposed method outperforms several state-of-the-art methods in both segmentation and diagnosis tasks and leverages the combination of global and local image information to improve the diagnosis. Our results, with an AUC of 87.41% in healthy-pathological diagnosis and 81.90% in multi-pathological diagnosis, support the use of our system as a helpful tool in the healthcare system.
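
The three-branched multi-task idea, one shared encoder feeding a kidney-segmentation head, a global-pathology head, and a local-pathology head, can be sketched schematically in PyTorch. Everything below (channel widths, head designs, input size) is an illustrative assumption; only the two-global/nine-local output split comes from the abstract.

```python
import torch
import torch.nn as nn

class MultiTaskUltrasoundNet(nn.Module):
    """Schematic three-branch design: shared encoder plus segmentation,
    global-pathology, and local-pathology heads. Sizes are assumptions."""

    def __init__(self, n_global: int = 2, n_local: int = 9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(              # branch 1: kidney segmentation
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.global_head = nn.Linear(64, n_global)  # branch 2: global pathologies
        self.local_head = nn.Linear(64, n_local)    # branch 3: local pathologies

    def forward(self, x):
        feats = self.encoder(x)
        pooled = self.pool(feats).flatten(1)
        return self.seg_head(feats), self.global_head(pooled), self.local_head(pooled)

model = MultiTaskUltrasoundNet()
mask, global_logits, local_logits = model(torch.randn(1, 1, 128, 128))
print(mask.shape, global_logits.shape, local_logits.shape)
```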

{"title":"URI-CADS: A Fully Automated Computer-Aided Diagnosis System for Ultrasound Renal Imaging.","authors":"Miguel Molina-Moreno, Iván González-Díaz, Maite Rivera Gorrín, Víctor Burguera Vion, Fernando Díaz-de-María","doi":"10.1007/s10278-024-01055-4","DOIUrl":"10.1007/s10278-024-01055-4","url":null,"abstract":"<p><p>Ultrasound is a widespread imaging modality, with special application in medical fields such as nephrology. However, automated approaches for ultrasound renal interpretation still pose some challenges: (1) the need for manual supervision by experts at various stages of the system, which prevents its adoption in primary healthcare, and (2) their limited considered taxonomy (e.g., reduced number of pathologies), which makes them unsuitable for training practitioners and providing support to experts. This paper proposes a fully automated computer-aided diagnosis system for ultrasound renal imaging addressing both of these challenges. Our system is based in a multi-task architecture, which is implemented by a three-branched convolutional neural network and is capable of segmenting the kidney and detecting global and local pathologies with no need of human interaction during diagnosis. The integration of different image perspectives at distinct granularities enhanced the proposed diagnosis. We employ a large (1985 images) and demanding ultrasound renal imaging database, publicly released with the system and annotated on the basis of an exhaustive taxonomy of two global and nine local pathologies (including cysts, lithiasis, hydronephrosis, angiomyolipoma), establishing a benchmark for ultrasound renal interpretation. Experiments show that our proposed method outperforms several state-of-the-art methods in both segmentation and diagnosis tasks and leverages the combination of global and local image information to improve the diagnosis. Our results, with a 87.41% of AUC in healthy-pathological diagnosis and 81.90% in multi-pathological diagnosis, support the use of our system as a helpful tool in the healthcare system.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300425/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Hybrid Framework of Dual-Domain Signal Restoration and Multi-depth Feature Reinforcement for Low-Dose Lung CT Denoising.
Pub Date : 2024-08-01 Epub Date: 2024-02-29 DOI: 10.1007/s10278-023-00934-6
Jianning Chi, Zhiyi Sun, Shuyu Tian, Huan Wang, Siqi Wang

Low-dose computed tomography (LDCT) has been widely used in medical diagnosis. Various denoising methods have been presented to remove noise in LDCT scans. However, existing methods cannot achieve satisfactory results due to the difficulties in (1) distinguishing the characteristics of structures, textures, and noise confused in the image domain, and (2) representing local details and global semantics in the hierarchical features. In this paper, we propose a novel denoising method consisting of (1) a 2D dual-domain restoration framework to reconstruct noise-free structure and texture signals separately, and (2) a 3D multi-depth reinforcement U-Net model to further recover image details with enhanced hierarchical features. In the 2D dual-domain restoration framework, convolutional neural networks are adopted in both the image domain, where the image structures are well preserved through spatial continuity, and the sinogram domain, where the textures and noise are separately represented by different wavelet coefficients and processed adaptively. In the 3D multi-depth reinforcement U-Net model, the hierarchical features from the 3D U-Net are enhanced by the cross-resolution attention module (CRAM) and dual-branch graph convolution module (DBGCM). The CRAM preserves local details by integrating adjacent low-level features with different resolutions, while the DBGCM enhances global semantics by building graphs for high-level features in intra-feature and inter-feature dimensions. Experimental results on the LUNA16 dataset and the 2016 NIH-AAPM-Mayo Clinic LDCT Grand Challenge dataset illustrate that the proposed method outperforms state-of-the-art methods in removing noise from LDCT images with clear structures and textures, proving its potential in clinical practice.
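
In the sinogram domain, the framework represents textures and noise by different wavelet coefficients. The sketch below shows the underlying decompose/process/reconstruct pattern with PyWavelets, using a fixed shrinkage of the finest sub-band as a stand-in for the paper's learned, adaptive processing; the wavelet choice and scaling factor are assumptions.

```python
import numpy as np
import pywt

# Illustrative sinogram-domain step: decompose into wavelet sub-bands,
# attenuate the finest detail coefficients (where noise concentrates),
# and reconstruct. The paper processes coefficients adaptively with CNNs.
sinogram = np.random.rand(360, 512)  # angles x detector bins

coeffs = pywt.wavedec2(sinogram, wavelet="db2", level=2)
approx, details = coeffs[0], coeffs[1:]

processed = [approx]
for i, (cH, cV, cD) in enumerate(details):
    scale = 0.5 if i == len(details) - 1 else 1.0  # damp only the finest level
    processed.append((cH * scale, cV * scale, cD * scale))

denoised = pywt.waverec2(processed, wavelet="db2")
denoised = denoised[: sinogram.shape[0], : sinogram.shape[1]]  # crop boundary padding
print(denoised.shape)  # (360, 512)
```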

{"title":"A Hybrid Framework of Dual-Domain Signal Restoration and Multi-depth Feature Reinforcement for Low-Dose Lung CT Denoising.","authors":"Jianning Chi, Zhiyi Sun, Shuyu Tian, Huan Wang, Siqi Wang","doi":"10.1007/s10278-023-00934-6","DOIUrl":"10.1007/s10278-023-00934-6","url":null,"abstract":"<p><p>Low-dose computer tomography (LDCT) has been widely used in medical diagnosis. Various denoising methods have been presented to remove noise in LDCT scans. However, existing methods cannot achieve satisfactory results due to the difficulties in (1) distinguishing the characteristics of structures, textures, and noise confused in the image domain, and (2) representing local details and global semantics in the hierarchical features. In this paper, we propose a novel denoising method consisting of (1) a 2D dual-domain restoration framework to reconstruct noise-free structure and texture signals separately, and (2) a 3D multi-depth reinforcement U-Net model to further recover image details with enhanced hierarchical features. In the 2D dual-domain restoration framework, the convolutional neural networks are adopted in both the image domain where the image structures are well preserved through the spatial continuity, and the sinogram domain where the textures and noise are separately represented by different wavelet coefficients and processed adaptively. In the 3D multi-depth reinforcement U-Net model, the hierarchical features from the 3D U-Net are enhanced by the cross-resolution attention module (CRAM) and dual-branch graph convolution module (DBGCM). The CRAM preserves local details by integrating adjacent low-level features with different resolutions, while the DBGCM enhances global semantics by building graphs for high-level features in intra-feature and inter-feature dimensions. Experimental results on the LUNA16 dataset and 2016 NIH-AAPM-Mayo Clinic LDCT Grand Challenge dataset illustrate the proposed method outperforms the state-of-the-art methods on removing noise from LDCT images with clear structures and textures, proving its potential in clinical practice.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Multiparametric MRI-based Radiomics Model for Stratifying Postoperative Recurrence in Luminal B Breast Cancer.
Pub Date : 2024-08-01 Epub Date: 2024-02-29 DOI: 10.1007/s10278-023-00923-9
Kepei Xu, Meiqi Hua, Ting Mai, Xiaojing Ren, Xiaozheng Fang, Chunjie Wang, Min Ge, Hua Qian, Maosheng Xu, Ruixin Zhang

This study aims to develop an MRI-based radiomics model to assess the likelihood of recurrence in luminal B breast cancer. The study analyzed medical images and clinical data from 244 patients with luminal B breast cancer. Of 244 patients, 35 had experienced recurrence and 209 had not. The patients were randomly divided into the training set (51.5 ± 12.5 years old; n = 171) and the test set (51.7 ± 11.3 years old; n = 73) in a ratio of 7:3. The study employed univariate and multivariate Cox regression along with the least absolute shrinkage and selection operator (LASSO) regression methods to select radiomics features and calculate a risk score. A combined model was constructed by integrating the risk score with the clinical and pathological characteristics. The study identified two radiomics features (GLSZM and GLRLM) from DCE-MRI that were used to calculate a risk score. The AUCs were 0.860 and 0.868 in the training set and 0.816 and 0.714 in the testing set for 3- and 5-year recurrence risk, respectively. The combined model incorporating the risk score, pN, and endocrine therapy showed improved predictive power, with AUCs of 0.857 and 0.912 in the training set and 0.943 and 0.945 in the testing set for 3- and 5-year recurrence risk, respectively. The calibration curve of the combined model showed good consistency between predicted and measured values. Our study developed an MRI-based radiomics model that integrates clinical and radiomics features to assess the likelihood of recurrence in luminal B breast cancer. The model shows promise for improving clinical risk stratification and treatment decision-making.
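
The modeling chain in the abstract, a penalized Cox regression over selected radiomics features yielding a per-patient risk score, can be sketched with the lifelines package. The data below are random stand-ins and the penalty strength is arbitrary; only the idea of an L1-penalized (LASSO) Cox fit producing a linear-predictor-based risk score is taken from the text.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 171  # training-set size in the study

# Toy stand-ins for the two selected radiomics features (GLSZM- and
# GLRLM-derived in the paper) plus follow-up time and recurrence status.
df = pd.DataFrame({
    "glszm_feature": rng.normal(size=n),
    "glrlm_feature": rng.normal(size=n),
    "time_months":   rng.exponential(60, size=n),
    "recurrence":    rng.integers(0, 2, size=n),
})

# LASSO-penalized Cox model (l1_ratio=1.0 gives a pure L1 penalty).
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time_months", event_col="recurrence")

# Per-patient risk score: the partial hazard, i.e., exp(linear predictor).
risk_score = cph.predict_partial_hazard(df)
print(risk_score.head())
```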

{"title":"A Multiparametric MRI-based Radiomics Model for Stratifying Postoperative Recurrence in Luminal B Breast Cancer.","authors":"Kepei Xu, Meiqi Hua, Ting Mai, Xiaojing Ren, Xiaozheng Fang, Chunjie Wang, Min Ge, Hua Qian, Maosheng Xu, Ruixin Zhang","doi":"10.1007/s10278-023-00923-9","DOIUrl":"10.1007/s10278-023-00923-9","url":null,"abstract":"<p><p>This study aims to develop an MRI-based radiomics model to assess the likelihood of recurrence in luminal B breast cancer. The study analyzed medical images and clinical data from 244 patients with luminal B breast cancer. Of 244 patients, 35 had experienced recurrence and 209 had not. The patients were randomly divided into the training set (51.5 ± 12.5 years old; n = 171) and the test set (51.7 ± 11.3 years old; n = 73) in a ratio of 7:3. The study employed univariate and multivariate Cox regression along with the least absolute shrinkage and selection operator (LASSO) regression methods to select radiomics features and calculate a risk score. A combined model was constructed by integrating the risk score with the clinical and pathological characteristics. The study identified two radiomics features (GLSZM and GLRLM) from DCE-MRI that were used to calculate a risk score. The AUCs were 0.860 and 0.868 in the training set and 0.816 and 0.714 in the testing set for 3- and 5-year recurrence risk, respectively. The combined model incorporating the risk score, pN, and endocrine therapy showed improved predictive power, with AUCs of 0.857 and 0.912 in the training set and 0.943 and 0.945 in the testing set for 3- and 5-year recurrence risk, respectively. The calibration curve of the combined model showed good consistency between predicted and measured values. Our study developed an MRI-based radiomics model that integrates clinical and radiomics features to assess the likelihood of recurrence in luminal B breast cancer. The model shows promise for improving clinical risk stratification and treatment decision-making.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images.
Pub Date : 2024-08-01 Epub Date: 2024-02-21 DOI: 10.1007/s10278-024-01036-7
Ronghui Tian, Guoxiu Lu, Nannan Zhao, Wei Qian, He Ma, Wei Yang

The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors from DBT-CCs and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors via DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [CI: 0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.
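
The late-fusion (stacking) strategy that performed best can be sketched with scikit-learn: one SVM per modality feeds a meta-learner on its predicted probabilities. The features below are random stand-ins with assumed dimensions; the kernels, meta-learner, and exact feature sets are not specified here.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 322  # model-construction cohort size in the study

# Toy stand-ins: per-patient feature blocks from the two modalities.
X_dbt = rng.normal(size=(n, 64))  # DBT-CC-derived features
X_us = rng.normal(size=(n, 64))   # ultrasound-derived features
X = np.hstack([X_dbt, X_us])
y = rng.integers(0, 2, size=n)    # benign (0) vs. malignant (1)

def modality_svm(cols):
    # SVM restricted to one modality's feature columns
    select = ColumnTransformer([("sel", "passthrough", cols)])
    return make_pipeline(select, SVC(probability=True))

# Stacking: per-modality SVM probabilities feed a logistic meta-learner.
stack = StackingClassifier(
    estimators=[
        ("svm_dbt", modality_svm(slice(0, 64))),
        ("svm_us", modality_svm(slice(64, 128))),
    ],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
)
stack.fit(X, y)
print(stack.predict(X[:5]))
```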

{"title":"Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images.","authors":"Ronghui Tian, Guoxiu Lu, Nannan Zhao, Wei Qian, He Ma, Wei Yang","doi":"10.1007/s10278-024-01036-7","DOIUrl":"10.1007/s10278-024-01036-7","url":null,"abstract":"<p><p>The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors from DBT-CCs and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors via DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11300407/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139914400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0