
Journal of imaging informatics in medicine: latest articles

Deep Convolutional Neural Network for Automated Staging of Periodontal Bone Loss Severity on Bite-wing Radiographs: An Eigen-CAM Explainability Mapping Approach.
Pub Date : 2024-08-15 DOI: 10.1007/s10278-024-01218-3
Mediha Erturk, Muhammet Üsame Öziç, Melek Tassoker

Periodontal disease is a significant global oral health problem. Radiographic staging is critical in determining periodontitis severity and treatment requirements. This study aims to automatically stage periodontal bone loss from bite-wing images using a deep learning approach. A total of 1752 bite-wing images were used for the study. Radiological examinations were classified into 4 groups: healthy (normal), no bone loss; stage I (mild destruction), bone loss in the coronal third (< 15%); stage II (moderate destruction), bone loss in the coronal third of 15-33%; stage III-IV (severe destruction), bone loss extending from the middle third to the apical third with furcation destruction (> 33%). All images were converted to 512 × 400 dimensions using bilinear interpolation. The data were divided into 80% training/validation and 20% testing. The classification module of the YOLOv8 deep learning model was used for the artificial intelligence-based classification of the images. The four-class model was trained using fivefold cross-validation after transfer learning and fine-tuning. After training, the 20% of test data that the system had never seen were analyzed using the artificial intelligence weights obtained in each cross-validation fold. Training and test results were calculated with average accuracy, precision, recall, and F1-score performance metrics. Test images were analyzed with Eigen-CAM explainability heat maps. In the classification of bite-wing images as healthy, mild destruction, moderate destruction, and severe destruction, training performance results were 86.100% accuracy, 84.790% precision, 82.350% recall, and 84.411% F1-score, and test performance results were 83.446% accuracy, 81.742% precision, 80.883% recall, and 81.090% F1-score. The deep learning model gave successful results in staging periodontal bone loss in bite-wing images. Classification scores were relatively high for normal (no bone loss) and severe bone loss, as these are more clearly visible in bite-wing images than mild and moderate damage.
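For readers who want to reproduce the general workflow, the sketch below resizes bite-wing images to 512 × 400 with bilinear interpolation and fine-tunes the classification variant of YOLOv8 via the ultralytics package. The folder layout, file format, model size, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
from pathlib import Path
from PIL import Image
from ultralytics import YOLO

SRC = Path("bitewings_raw")       # hypothetical layout: <split>/<class>/<image>.png
DST = Path("bitewings_512x400")   # resized copies in the same layout

# Resize every image to 512 x 400 with bilinear interpolation, as in the study.
for img_path in SRC.rglob("*.png"):
    out_path = DST / img_path.relative_to(SRC)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    Image.open(img_path).convert("L").resize((512, 400), Image.BILINEAR).save(out_path)

# Fine-tune a pretrained YOLOv8 classification model on the 4-class task
# (healthy, stage I, stage II, stage III-IV). One fold of a fivefold CV loop
# would repeat this call with a different train/val split.
model = YOLO("yolov8n-cls.pt")
model.train(data=str(DST), epochs=100, imgsz=512)
metrics = model.val()  # reports accuracy on the validation split
print(metrics)
```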

Construction and Validation of a General Medical Image Dataset for Pretraining.
Pub Date : 2024-08-15 DOI: 10.1007/s10278-024-01226-3
Rongguo Zhang, Chenhao Pei, Ji Shi, Shaokang Wang

In the field of deep learning for medical image analysis, training models from scratch is common, and transfer learning from parameters pretrained on ImageNet is sometimes also adopted. However, there is currently no universally accepted medical image dataset specifically designed for pretraining models. The purpose of this study is to construct such a general dataset and validate its effectiveness on downstream medical imaging tasks, including classification and segmentation. In this work, we first build a medical image dataset by collecting several public medical image datasets (CPMID). Pretrained models for transfer learning are then obtained from CPMID. ResNet architectures of varying complexity and the Vision Transformer network are used as the backbones. For classification and segmentation tasks on three other datasets, we compared the experimental results of training from scratch, from parameters pretrained on ImageNet, and from parameters pretrained on CPMID. Accuracy, the area under the receiver operating characteristic curve, and class activation maps are used as metrics for classification performance; Intersection over Union is used as the metric for segmentation evaluation. Utilizing the parameters pretrained on the constructed dataset CPMID, we achieved the best classification accuracy, weighted accuracy, and ROC-AUC values on the three validation datasets. Notably, the average classification accuracy outperformed ImageNet-based results by 4.30%, 8.86%, and 3.85%, respectively. Furthermore, we achieved the best balance of performance and efficiency in both classification and segmentation tasks. The parameters pretrained on the proposed dataset CPMID are highly effective for common tasks in medical image analysis such as classification and segmentation.
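A minimal sketch of the transfer-learning step described above, assuming a ResNet-50 backbone and a hypothetical CPMID checkpoint file; the downstream class count and optimizer settings are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of classes in the downstream task

# Start from a ResNet backbone and load parameters pretrained on a general
# medical image dataset (here a hypothetical CPMID checkpoint file), instead of
# ImageNet weights or random initialization.
model = models.resnet50(weights=None)
state_dict = torch.load("cpmid_resnet50_pretrained.pth", map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # ignore the old classification head

# Replace the final fully connected layer for the downstream classification task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Standard fine-tuning setup; optimizer and learning rate are illustrative only.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```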

Deep Learning-Based Prediction of Post-treatment Survival in Hepatocellular Carcinoma Patients Using Pre-treatment CT Images and Clinical Data.
Pub Date : 2024-08-15 DOI: 10.1007/s10278-024-01227-2
Kyung Hwa Lee, Jungwook Lee, Gwang Hyeon Choi, Jihye Yun, Jiseon Kang, Jonggi Choi, Kang Mo Kim, Namkug Kim

The objective of this study was to develop and evaluate a model for predicting post-treatment survival in hepatocellular carcinoma (HCC) patients using their CT images and clinical information, including various treatment information. We collected pre-treatment contrast-enhanced CT images and clinical information including patient-related factors, initial treatment options, and survival status from 692 patients. The patient cohort was divided into a training cohort (n = 507), a testing cohort (n = 146), and an external CT cohort (n = 39), which included patients who underwent CT scans at other institutions. After model training using fivefold cross-validation, model validation was performed on both the testing cohort and the external CT cohort. Our cascaded model employed a 3D convolutional neural network (CNN) to extract features from CT images and derive final survival probabilities. These probabilities were obtained by concatenating previously predicted probabilities for each interval with the patient-related factors and treatment options. We utilized two consecutive fully connected layers for this process, resulting in a number of final outputs corresponding to the number of time intervals, with values representing conditional survival probabilities for each interval. Performance was assessed using the concordance index (C-index), the mean cumulative/dynamic area under the receiver operating characteristics curve (mC/D AUC), and the mean Brier score (mBS), calculated every 3 months. Through an ablation study, we found that using DenseNet-121 as the backbone network and setting the prediction interval to 6 months optimized the model's performance. The integration of multimodal data resulted in superior predictive capabilities compared to models using only CT images or clinical information (C index 0.824 [95% CI 0.822-0.826], mC/D AUC 0.893 [95% CI 0.891-0.895], and mBS 0.121 [95% CI 0.120-0.123] for internal test cohort; C index 0.750 [95% CI 0.747-0.753], mC/D AUC 0.819 [95% CI 0.816-0.823], and mBS 0.159 [95% CI 0.158-0.161] for external CT cohort, respectively). Our CNN-based discrete-time survival prediction model with CT images and clinical information demonstrated promising results in predicting post-treatment survival of patients with HCC.
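The discrete-time survival head described above can be sketched as follows. This is a simplified illustration: feature dimensions, hidden size, and interval count are assumptions, and the authors' scheme of feeding previously predicted interval probabilities back into the input is omitted for brevity.

```python
import torch
import torch.nn as nn

class DiscreteTimeSurvivalHead(nn.Module):
    """Illustrative discrete-time survival head: image features from a 3D CNN are
    concatenated with clinical/treatment variables, and two fully connected layers
    output one conditional survival probability per time interval (e.g., 6 months)."""

    def __init__(self, image_dim: int, clinical_dim: int, n_intervals: int, hidden: int = 256):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(image_dim + clinical_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_intervals),
        )

    def forward(self, image_feat: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image_feat, clinical], dim=1)
        cond_surv = torch.sigmoid(self.fc(x))   # P(survive interval k | survived up to k)
        return torch.cumprod(cond_surv, dim=1)  # marginal survival curve per patient

# Example: 512-dim CNN features, 10 clinical variables, 20 six-month intervals.
head = DiscreteTimeSurvivalHead(image_dim=512, clinical_dim=10, n_intervals=20)
surv = head(torch.randn(4, 512), torch.randn(4, 10))  # (batch, n_intervals) probabilities
```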

Ensemble of Deep Learning Architectures with Machine Learning for Pneumonia Classification Using Chest X-rays.
Pub Date : 2024-08-13 DOI: 10.1007/s10278-024-01201-y
Rupali Vyas, Deepak Rao Khadatkar

Pneumonia is a severe health concern, particularly for vulnerable groups, and requires early and correct classification for optimal treatment. This study addresses the use of deep learning combined with machine learning classifiers (DLxMLCs) for pneumonia classification from chest X-ray (CXR) images. We deployed modified VGG19, ResNet50V2, and DenseNet121 models for feature extraction, followed by five machine learning classifiers (logistic regression, support vector machine, decision tree, random forest, artificial neural network). The suggested approach displayed remarkable accuracy, with the VGG19 and DenseNet121 models obtaining 99.98% accuracy when combined with random forest or decision tree classifiers. ResNet50V2 achieved 99.25% accuracy with random forest. These results illustrate the advantages of merging deep learning models with machine learning classifiers for fast and accurate identification of pneumonia. The study underlines the potential of DLxMLC systems in enhancing diagnostic accuracy and efficiency. By integrating these models into clinical practice, healthcare practitioners could greatly improve patient care and outcomes. Future research should focus on refining these models and exploring their application to other medical imaging tasks, as well as including explainability methodologies to better understand their decision-making processes and build trust in their clinical use. This technique holds promise for breakthroughs in medical imaging and patient management.
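A hedged sketch of the DLxMLC idea: a pretrained CNN (here an off-the-shelf ImageNet VGG19 rather than the authors' modified network) is used purely as a feature extractor, and a scikit-learn random forest is fit on the extracted features. Paths and hyperparameters are placeholders.

```python
import numpy as np
import torch
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Feature extractor: VGG19 backbone with the classification head removed.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Identity()  # forward now returns the flattened conv features
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(folder: str) -> tuple[np.ndarray, np.ndarray]:
    ds = ImageFolder(folder, transform=preprocess)   # hypothetical per-class CXR folders
    loader = DataLoader(ds, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(vgg(x).numpy())
            labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

X_train, y_train = extract_features("cxr/train")
X_test, y_test = extract_features("cxr/test")

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```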

Optimizing Acute Stroke Segmentation on MRI Using Deep Learning: Self-Configuring Neural Networks Provide High Performance Using Only DWI Sequences.
Pub Date : 2024-08-13 DOI: 10.1007/s10278-024-00994-2
Peter Kamel, Adway Kanhere, Pranav Kulkarni, Mazhar Khalid, Rachel Steger, Uttam Bodanapally, Dheeraj Gandhi, Vishwa Parekh, Paul H Yi

Segmentation of infarcts is clinically important in ischemic stroke management and prognostication. It is unclear what role the combination of DWI, ADC, and FLAIR MRI sequences provides for deep learning in infarct segmentation. Recent technologies in model self-configuration have promised greater performance and generalizability through automated optimization. We assessed the utility of DWI, ADC, and FLAIR sequences for ischemic stroke segmentation, compared self-configuring nnU-Net models to conventional U-Net models without manual optimization, and evaluated the generalizability of results on an external clinical dataset. 3D self-configuring nnU-Net models and standard 3D U-Net models built with MONAI were trained on 200 infarcts using DWI, ADC, and FLAIR sequences separately and in all combinations. Segmentation results were compared between models using a paired t-test on a hold-out test set of 50 cases. The highest-performing model was externally validated on a clinical dataset of 50 MRIs. nnU-Net with DWI sequences attained a Dice score of 0.810 ± 0.155. There was no statistically significant difference when DWI sequences were supplemented with ADC and FLAIR images (Dice score of 0.813 ± 0.150; p = 0.15). nnU-Net models significantly outperformed standard U-Net models for all sequence combinations (p < 0.001). On the external dataset, Dice scores measured 0.704 ± 0.199 for positive cases, with false positives occurring in cases of intracranial hemorrhage. Highly optimized neural networks such as nnU-Net provide excellent stroke segmentation even when provided only DWI images, without significant improvement from other sequences. This differs from, and significantly outperforms, standard U-Net architectures. Results translated well to the external clinical environment and provide the groundwork for optimized acute stroke segmentation on MRI.
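The evaluation protocol (per-case Dice scores compared with a paired t-test) can be reproduced in a few lines; the scores below are random placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical per-case Dice scores for two models on the same 50 hold-out cases.
dice_nnunet = np.random.default_rng(0).uniform(0.6, 0.95, size=50)
dice_unet = dice_nnunet - np.random.default_rng(1).uniform(0.0, 0.15, size=50)

# Paired t-test comparing the two models case by case, as in the study.
t_stat, p_value = stats.ttest_rel(dice_nnunet, dice_unet)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```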

A Deep-Learning-Enabled Electrocardiogram and Chest X-Ray for Detecting Pulmonary Arterial Hypertension.
Pub Date : 2024-08-13 DOI: 10.1007/s10278-024-01225-4
Pang-Yen Liu, Shi-Chue Hsing, Dung-Jang Tsai, Chin Lin, Chin-Sheng Lin, Chih-Hung Wang, Wen-Hui Fang

The diagnosis and treatment of pulmonary hypertension have changed dramatically through the redefined diagnostic criteria and advanced drug development in the past decade. The application of Artificial Intelligence for the detection of elevated pulmonary arterial pressure (ePAP) was reported recently. Artificial Intelligence (AI) has demonstrated the capability to identify ePAP and its association with hospitalization due to heart failure when analyzing chest X-rays (CXR). An AI model based on electrocardiograms (ECG) has shown promise in not only detecting ePAP but also in predicting future risks related to cardiovascular mortality. We aimed to develop an AI model integrating ECG and CXR to detect ePAP and evaluate its performance. We developed a deep-learning model (DLM) using paired ECG and CXR to detect ePAP (systolic pulmonary artery pressure > 50 mmHg in transthoracic echocardiography). This model was further validated in a community hospital. Additionally, our DLM was evaluated for its ability to predict future occurrences of left ventricular dysfunction (LVD, ejection fraction < 35%) and cardiovascular mortality. The AUCs for detecting ePAP were as follows: 0.8261 with ECG (sensitivity 76.6%, specificity 74.5%), 0.8525 with CXR (sensitivity 82.8%, specificity 72.7%), and 0.8644 with a combination of both (sensitivity 78.6%, specificity 79.2%) in the internal dataset. In the external validation dataset, the AUCs for ePAP detection were 0.8348 with ECG, 0.8605 with CXR, and 0.8734 with the combination. Furthermore, using the combination of ECGs and CXR, the negative predictive value (NPV) was 98% in the internal dataset and 98.1% in the external dataset. Patients with ePAP detected by the combined DLM had a higher risk of new-onset LVD, with a hazard ratio (HR) of 4.51 (95% CI: 3.54-5.76) in the internal dataset, and of cardiovascular mortality, with an HR of 6.08 (95% CI: 4.66-7.95). Similar results were seen in the external validation dataset. The DLM, integrating ECG and CXR, effectively detected ePAP with a strong NPV and forecasted future risks of developing LVD and cardiovascular mortality. This model has the potential to expedite the early identification of pulmonary hypertension in patients, prompting further evaluation through echocardiography and, when necessary, right heart catheterization (RHC), potentially resulting in enhanced cardiovascular outcomes.
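The abstract does not describe the DLM architecture in detail; the sketch below shows one plausible late-fusion design in which an ECG encoder and a CXR encoder produce embeddings that are concatenated for a binary ePAP prediction. All layer sizes and the ECG length are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class EcgCxrFusion(nn.Module):
    """Illustrative late-fusion model: a 1D CNN embeds the 12-lead ECG, an image CNN
    embeds the CXR, and the concatenated embeddings feed a binary ePAP classifier."""

    def __init__(self, ecg_channels: int = 12, embed_dim: int = 128):
        super().__init__()
        self.ecg_encoder = nn.Sequential(
            nn.Conv1d(ecg_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        cxr_backbone = models.resnet18(weights=None)
        cxr_backbone.fc = nn.Linear(cxr_backbone.fc.in_features, embed_dim)
        self.cxr_encoder = cxr_backbone
        self.head = nn.Linear(2 * embed_dim, 1)  # logit for ePAP (sPAP > 50 mmHg)

    def forward(self, ecg: torch.Tensor, cxr: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.ecg_encoder(ecg), self.cxr_encoder(cxr)], dim=1)
        return self.head(z)

model = EcgCxrFusion()
logit = model(torch.randn(2, 12, 5000), torch.randn(2, 3, 224, 224))  # batch of 2
```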

The Usefulness of Low-Kiloelectron Volt Virtual Monochromatic Contrast-Enhanced Computed Tomography with Deep Learning Image Reconstruction Technique in Improving the Delineation of Pancreatic Ductal Adenocarcinoma.
Pub Date : 2024-08-13 DOI: 10.1007/s10278-024-01214-7
Yasutaka Ichikawa, Yoshinori Kanii, Akio Yamazaki, Mai Kobayashi, Kensuke Domae, Motonori Nagata, Hajime Sakuma

To evaluate the usefulness of low-keV multiphasic computed tomography (CT) with deep learning image reconstruction (DLIR) in improving the delineation of pancreatic ductal adenocarcinoma (PDAC) compared to conventional hybrid iterative reconstruction (HIR). Thirty-five patients with PDAC who underwent multiphasic CT were retrospectively evaluated. Raw data were reconstructed with two energy levels (40 keV and 70 keV) of virtual monochromatic imaging (VMI) using HIR (ASiR-V50%) and DLIR (TrueFidelity-H). Contrast-to-noise ratio (CNRtumor) was calculated from the CT values within regions of interest in tumor and normal pancreas in the pancreatic parenchymal phase images. Lesion conspicuity of PDAC in the pancreatic parenchymal phase on 40-keV HIR, 40-keV DLIR, and 70-keV DLIR images was qualitatively rated on a 5-point scale, using 70-keV HIR images as reference (score 1 = poor; score 3 = equivalent to reference; score 5 = excellent) by two radiologists. CNRtumor of 40-keV DLIR images (median 10.4, interquartile range (IQR) 7.8-14.9) was significantly higher than that of the other VMIs (40-keV HIR, median 6.2, IQR 4.4-8.5, P < 0.0001; 70-keV DLIR, median 6.3, IQR 5.1-9.9, P = 0.0002; 70-keV HIR, median 4.2, IQR 3.1-6.1, P < 0.0001). CNRtumor of 40-keV DLIR images was significantly better than that of the 40-keV HIR and 70-keV HIR images by 72 ± 22% and 211 ± 340%, respectively. Lesion conspicuity scores on 40-keV DLIR images (observer 1, 4.5 ± 0.7; observer 2, 3.4 ± 0.5) were significantly higher than on 40-keV HIR (observer 1, 3.3 ± 0.9, P < 0.0001; observer 2, 3.1 ± 0.4, P = 0.013). DLIR is a promising reconstruction method to improve PDAC delineation in 40-keV VMI at the pancreatic parenchymal phase compared to conventional HIR.
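The exact CNR formula is not given in the abstract; a common definition divides the tumor-to-parenchyma attenuation difference by the standard deviation within the parenchyma ROI, as in this sketch on synthetic data.

```python
import numpy as np

def cnr_tumor(ct: np.ndarray, tumor_roi: np.ndarray, pancreas_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between tumor and normal pancreatic parenchyma,
    using one common definition: |mean difference| / SD of the parenchyma ROI."""
    tumor_hu = ct[tumor_roi]
    pancreas_hu = ct[pancreas_roi]
    return abs(tumor_hu.mean() - pancreas_hu.mean()) / pancreas_hu.std()

# Toy example: a synthetic 2D "CT slice" with two rectangular ROIs.
ct = np.random.default_rng(0).normal(loc=80.0, scale=10.0, size=(512, 512))
ct[200:240, 200:240] -= 40.0  # hypodense "tumor"
tumor_roi = np.zeros_like(ct, dtype=bool); tumor_roi[205:235, 205:235] = True
pancreas_roi = np.zeros_like(ct, dtype=bool); pancreas_roi[300:340, 300:340] = True
print(f"CNR_tumor = {cnr_tumor(ct, tumor_roi, pancreas_roi):.2f}")
```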

Automated Cerebrovascular Segmentation and Visualization of Intracranial Time-of-Flight Magnetic Resonance Angiography Based on Deep Learning.
Pub Date : 2024-08-12 DOI: 10.1007/s10278-024-01215-6
Yuqin Min, Jing Li, Shouqiang Jia, Yuehua Li, Shengdong Nie

Time-of-flight magnetic resonance angiography (TOF-MRA) is a non-contrast technique used to visualize the neurovasculature. However, manual reconstruction of the volume rendering (VR) by radiologists is time-consuming and labor-intensive. Deep learning-based (DL-based) vessel segmentation technology may provide an intelligent, automated workflow. The aim was to evaluate the image quality of DL vessel segmentation for automatically extracting intracranial arteries from TOF-MRA. A total of 394 TOF-MRA scans were selected, which included healthy cerebral vasculature, aneurysms, or stenoses. Both our proposed method and two state-of-the-art DL methods are evaluated on external datasets for generalization ability. For qualitative assessment, two experienced clinical radiologists evaluated the cerebrovascular diagnostic and visualization quality (scored 0-5, from unacceptable to excellent) obtained by manual VR reconstruction or automatic convolutional neural network (CNN) segmentation. The proposed CNN outperforms the other two DL-based methods in clinical scoring on external datasets, and its visualization was evaluated by readers as having the appearance of the radiologists' manual reconstructions. Scoring of the proposed CNN and manual VR of intracranial arteries demonstrated good to excellent agreement with no significant differences (median, 5.0 and 5.0, P ≥ 12) for healthy-type scans. All proposed CNN results were considered to have adequate diagnostic quality (median scores > 2). Quantitative analysis demonstrated a superior Dice similarity coefficient of cerebrovascular overlap (training sets and validation sets; 0.947 and 0.927). Automatic cerebrovascular segmentation using DL is feasible, and the image quality in terms of vessel integrity, collateral circulation, and lesion morphology is comparable to expert manual VR, without significant differences.
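Computing the reported Dice similarity coefficient for one case might look like the following, assuming hypothetical NIfTI file names and a 0.5 threshold on the CNN's vessel probability map.

```python
import nibabel as nib
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical file names: a CNN vessel probability map and a manual reference mask
# for one TOF-MRA scan, both stored as NIfTI volumes in the same space.
prob = nib.load("tofmra_case001_cnn_prob.nii.gz").get_fdata()
manual = nib.load("tofmra_case001_manual_mask.nii.gz").get_fdata() > 0.5

pred = prob > 0.5  # threshold the probability map into a binary vessel mask
print(f"DSC = {dice_coefficient(pred, manual):.3f}")
```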

Robust ROI Detection in Whole Slide Images Guided by Pathologists' Viewing Patterns.
Pub Date : 2024-08-09 DOI: 10.1007/s10278-024-01202-x
Fatemeh Ghezloo, Oliver H Chang, Stevan R Knezevich, Kristin C Shaw, Kia Gianni Thigpen, Lisa M Reisch, Linda G Shapiro, Joann G Elmore

Deep learning techniques offer improvements in computer-aided diagnosis systems. However, acquiring image domain annotations is challenging due to the knowledge and commitment required of expert pathologists. Pathologists often identify regions in whole slide images with diagnostic relevance rather than examining the entire slide, with a positive correlation between the time spent on these critical image regions and diagnostic accuracy. In this paper, a heatmap is generated to represent pathologists' viewing patterns during diagnosis and used to guide a deep learning architecture during training. The proposed system outperforms traditional approaches based on color and texture image characteristics, integrating pathologists' domain expertise to enhance region of interest detection without needing individual case annotations. Evaluating our best model, a U-Net model with a pre-trained ResNet-18 encoder, on a skin biopsy whole slide image dataset for melanoma diagnosis, shows its potential in detecting regions of interest, surpassing conventional methods with an increase of 20%, 11%, 22%, and 12% in precision, recall, F1-score, and Intersection over Union, respectively. In a clinical evaluation, three dermatopathologists agreed on the model's effectiveness in replicating pathologists' diagnostic viewing behavior and accurately identifying critical regions. Finally, our study demonstrates that incorporating heatmaps as supplementary signals can enhance the performance of computer-aided diagnosis systems. Without the availability of eye tracking data, identifying precise focus areas is challenging, but our approach shows promise in assisting pathologists in improving diagnostic accuracy and efficiency, streamlining annotation processes, and aiding the training of new pathologists.
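The abstract does not specify how the viewing heatmap guides training; one plausible mechanism, sketched below with the segmentation_models_pytorch library, is to up-weight the pixel-wise loss in regions the pathologist viewed longer. The U-Net with a pretrained ResNet-18 encoder matches the model family named above, but the weighting scheme and all tensors here are illustrative assumptions.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet-18 encoder pretrained on ImageNet, predicting one ROI channel.
model = smp.Unet(encoder_name="resnet18", encoder_weights="imagenet", in_channels=3, classes=1)

def heatmap_weighted_bce(logits: torch.Tensor, target: torch.Tensor,
                         heatmap: torch.Tensor, alpha: float = 4.0) -> torch.Tensor:
    """Pixel-wise BCE where regions the pathologist viewed longer (higher heatmap
    value in [0, 1]) are up-weighted by a factor of up to 1 + alpha."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, target, reduction="none")
    weights = 1.0 + alpha * heatmap
    return (weights * bce).mean()

x = torch.randn(2, 3, 256, 256)          # patches from a whole slide image
target = torch.randint(0, 2, (2, 1, 256, 256)).float()
heatmap = torch.rand(2, 1, 256, 256)     # normalized viewing-time heatmap
loss = heatmap_weighted_bce(model(x), target, heatmap)
```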

Allergy Wheal and Erythema Segmentation Using Attention U-Net.
Pub Date : 2024-08-09 DOI: 10.1007/s10278-024-01075-0
Yul Hee Lee, Ji-Su Shim, Young Jae Kim, Ji Soo Jeon, Sung-Yoon Kang, Sang Pyo Lee, Sang Min Lee, Kwang Gi Kim

The skin prick test (SPT) is a key tool for identifying sensitized allergens associated with immunoglobulin E-mediated allergic diseases such as asthma, allergic rhinitis, atopic dermatitis, urticaria, angioedema, and anaphylaxis. However, the SPT is labor-intensive and time-consuming due to the necessity of measuring the sizes of the erythema and wheals induced by allergens on the skin. In this study, we used an image preprocessing method and a deep learning model to segment wheals and erythema in SPT images captured by a smartphone camera. Subsequently, we assessed the deep learning model's performance by comparing the results with ground-truth data. Using contrast-limited adaptive histogram equalization (CLAHE), an image preprocessing technique designed to enhance image contrast, we augmented the chromatic contrast in 46 SPT images from 33 participants. We established a deep learning model for wheal and erythema segmentation using 144 and 150 training datasets, respectively. The wheal segmentation model achieved an accuracy of 0.9985, a sensitivity of 0.5621, a specificity of 0.9995, and a Dice similarity coefficient of 0.7079, whereas the erythema segmentation model achieved an accuracy of 0.9660, a sensitivity of 0.5787, a specificity of 0.97977, and a Dice similarity coefficient of 0.6636. The use of image preprocessing and deep learning technology in SPT is expected to have a significant positive impact on medical practice by ensuring the accurate segmentation of wheals and erythema, producing consistent evaluation results, and simplifying diagnostic processes.
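A minimal example of the CLAHE preprocessing step with OpenCV, applied to the lightness channel in LAB space; the file name, clip limit, and tile size are assumptions, not the study's parameters.

```python
import cv2

# Load a smartphone photo of the forearm after the skin prick test (hypothetical file).
bgr = cv2.imread("spt_forearm.jpg")

# Apply CLAHE to the lightness channel in LAB space so that local contrast is
# enhanced without distorting the red hue of erythema.
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)
enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

cv2.imwrite("spt_forearm_clahe.jpg", enhanced)
```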
