
Latest Publications in the Journal of Digital Imaging

An Automated Multi-scale Feature Fusion Network for Spine Fracture Segmentation Using Computed Tomography Images
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-15 | DOI: 10.1007/s10278-024-01091-0
Muhammad Usman Saeed, Wang Bin, Jinfang Sheng, Hussain Mobarak Albarakati

Spine fractures represent a critical health concern with far-reaching implications for patient care and clinical decision-making. Accurate segmentation of spine fractures from medical images is a crucial task, complicated by variability in fracture location, shape, type, and severity. Addressing these challenges often requires advanced machine learning and deep learning techniques. In this research, a novel multi-scale feature fusion deep learning model is proposed for automated spine fracture segmentation on Computed Tomography (CT) images. The proposed model consists of six modules: a Feature Fusion Module (FFM), Squeeze-and-Excitation Module (SEM), Atrous Spatial Pyramid Pooling (ASPP), Residual Convolution Block Attention Module (RCBAM), Residual Border Refinement Attention Block (RBRAB), and Local Position Residual Attention Block (LPRAB). These modules perform multi-scale feature fusion, spatial feature extraction, channel-wise feature improvement, refinement of segmentation borders, and positional focus on the region of interest. A decoder network then predicts the fractured spine. The experimental results show that the proposed approach addresses the above challenges with higher segmentation accuracy and compares favorably with existing segmentation methods.
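
As an illustration of the channel-attention idea behind the SEM block named above, here is a minimal PyTorch sketch of standard squeeze-and-excitation (Hu et al.); the abstract does not give the paper's own module composition, so treat this as the generic technique rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel attention in the SEM style: global-average-pool ("squeeze"),
    two-layer bottleneck MLP ("excitation"), then per-channel re-weighting."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # re-scale the feature map

feats = torch.randn(2, 64, 32, 32)       # e.g. CT feature maps
print(SqueezeExcitation(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```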

Citations: 0
Pure Vision Transformer (CT-ViT) with Noise2Neighbors Interpolation for Low-Dose CT Image Denoising
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-15 | DOI: 10.1007/s10278-024-01108-8
Luella Marcos, Paul Babyn, Javad Alirezaie

Convolutional neural networks (CNNs) have been used for a wide variety of deep learning applications, especially in computer vision. For medical image processing, researchers have identified certain challenges associated with CNNs: the generation of less informative features, limitations in capturing both high- and low-frequency information within feature maps, and the computational cost incurred when enlarging receptive fields by deepening the network. Transformers have emerged as an approach aiming to address these specific limitations of CNNs in medical image analysis. Preserving all spatial details of medical images is necessary to ensure accurate patient diagnosis. Hence, this research introduces a pure Vision Transformer (ViT) denoising network specifically for low-dose computed tomography (LDCT) images. The proposed model follows a U-Net framework containing ViT modules and integrates a Noise2Neighbors (N2N) interpolation operation. Five different datasets containing LDCT and normal-dose CT (NDCT) image pairs were used in the experiments. To test the efficacy of the proposed model, quantitative and visual results are compared among CNN-based (BM3D, RED-CNN, DRL-E-MP), hybrid CNN-ViT-based (TED-Net), and the proposed pure ViT-based denoising models. The findings show an increase of about 15-20% in SSIM and PSNR when using self-attention transformers compared with a typical pure CNN. Visual results also improve, especially in rendering fine structural details of CT images.
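
The abstract does not detail how the Noise2Neighbors interpolation is wired into the ViT U-Net; the sketch below only illustrates the neighbor-interpolation idea itself, assuming a simple 8-neighbor mean as the self-supervised training target, so both the kernel and the loss are assumptions rather than the authors' method.

```python
import torch
import torch.nn.functional as F

def neighbor_interpolate(img: torch.Tensor) -> torch.Tensor:
    """Replace every pixel with the mean of its 8 neighbours (centre weight 0).
    Under roughly independent per-pixel noise, the interpolated image gives a
    noisy-but-independent target, so a denoiser can be trained without clean
    NDCT references."""
    k = torch.ones(1, 1, 3, 3, device=img.device) / 8.0
    k[0, 0, 1, 1] = 0.0                              # exclude the centre pixel
    return F.conv2d(F.pad(img, (1, 1, 1, 1), mode="reflect"), k)

ldct = torch.rand(1, 1, 64, 64)                      # toy low-dose CT slice
target = neighbor_interpolate(ldct)
loss = F.mse_loss(ldct, target)                      # stand-in for the denoiser loss
print(target.shape, loss.item())
```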

Citations: 0
Automated Pulmonary Tuberculosis Severity Assessment on Chest X-rays
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-08 | DOI: 10.1007/s10278-024-01052-7
Karthik Kantipudi, Jingwen Gu, Vy Bui, Hang Yu, Stefan Jaeger, Ziv Yaniv

According to the World Health Organization's 2022 Global Tuberculosis (TB) report, an estimated 10.6 million people fell ill with TB and 1.6 million died from the disease in 2021. In addition, 2021 saw a reversal of a decades-long decline in TB infections and deaths, with an estimated 4.5% increase in the number of people who fell ill with TB compared to 2020 and an estimated yearly increase of 450,000 cases of drug-resistant TB. Estimating the severity of pulmonary TB from frontal chest X-rays (CXR) can enable better resource allocation in resource-constrained settings and monitoring of treatment response, allowing prompt treatment modifications if disease severity does not decrease over time. The Timika score is a clinically used TB severity score based on a CXR reading. This work proposes and evaluates three deep learning-based approaches for predicting the Timika score with varying levels of explainability. The first approach uses two deep learning-based models, one to explicitly detect lesion regions using YOLOv5n and another to predict the presence of cavitation using DenseNet121, whose outputs are then used in the score calculation. The second approach uses a DenseNet121-based regression model to directly predict the affected lung percentage and a DenseNet121-based classification model to predict cavitation presence. Finally, the third approach directly predicts the Timika score using a DenseNet121-based regression model. The best performance is achieved by the second approach, with a mean absolute error of 13-14% and a Pearson correlation of 0.7-0.84 across three held-out datasets used to evaluate generalization.
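
As a rough illustration of how the second approach assembles the score from its two heads: the Timika score is conventionally computed as the percentage of lung affected plus 40 points when cavitation is present (capped at 140). The variable names and head outputs below are hypothetical, not taken from the paper.

```python
def timika_score(affected_lung_pct: float, cavitation: bool) -> float:
    """Timika CXR severity score: percent of lung affected, plus 40 points
    when cavitation is present (maximum possible score is 140)."""
    return min(affected_lung_pct + (40.0 if cavitation else 0.0), 140.0)

# Hypothetical outputs of the two DenseNet121 heads:
pct = 22.5          # regression head: affected lung percentage
has_cavity = True   # classification head: cavitation present?
print(timika_score(pct, has_cavity))  # 62.5
```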

Citations: 0
Cascade-EC Network: Recognition of Gastrointestinal Multiple Lesions Based on EfficientNet and CA_stm_Retinanet
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-08 | DOI: 10.1007/s10278-024-01096-9
Xudong Guo, Lei Xu, Shengnan Li, Meidong Xu, Yuan Chu, Qinfen Jiang

Capsule endoscopy (CE) is non-invasive and painless during gastrointestinal examination. However, it increases the image-reviewing workload for clinicians, making examinations prone to missed and incorrect diagnoses. Current research has primarily concentrated on binary classifiers, multi-class classifiers targeting fewer than four abnormality types, detectors restricted to a specific segment of the digestive tract, and segmenters for a single type of anomaly. Due to intra-class variation, creating a unified scheme for detecting multiple gastrointestinal diseases is particularly challenging. Cascade-EC, a cascade neural network designed in this study, can automatically identify and localize four types of gastrointestinal lesions in CE images: angiectasis, bleeding, erosion, and polyp. Cascade-EC consists of EfficientNet for image classification and CA_stm_Retinanet for lesion detection and localization. As the first layer of Cascade-EC, the EfficientNet network classifies CE images; CA_stm_Retinanet, as the second layer, performs target detection and localization on the classified images. CA_stm_Retinanet adopts the general RetinaNet architecture; its feature extraction module is the CA_stm_Backbone, built from stacked CA_stm Blocks. The CA_stm Block adopts the split-transform-merge strategy and introduces coordinate attention. The dataset, from Shanghai East Hospital, was collected with PillCam SB3 and AnKon capsule endoscopes and contains 7936 images of 317 patients from 2017 to 2021. On the testing set, the average precision of Cascade-EC in the multi-lesion classification task was 94.55%, the average recall 90.60%, and the average F1 score 92.26%. The mean mAP@0.5 of Cascade-EC for detecting the four types of disease is 85.88%. The experimental results show that, compared with a single target detection network, Cascade-EC performs better and can effectively assist clinicians in classifying and detecting multiple lesions in CE images.
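
The coordinate attention that the CA_stm Block introduces is a published mechanism (Hou et al., 2021). Below is a minimal PyTorch sketch of coordinate attention alone, not of the full CA_stm Block, whose split-transform-merge wiring the abstract does not specify.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: factorises global pooling into per-row and
    per-column pooling so the attention map keeps positional cues."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = self.conv_h(y_h).sigmoid()                        # (B, C, H, 1)
        a_w = self.conv_w(y_w.permute(0, 1, 3, 2)).sigmoid()    # (B, C, 1, W)
        return x * a_h * a_w

print(CoordinateAttention(64)(torch.randn(1, 64, 16, 16)).shape)
```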

Citations: 0
An Automated Deep Learning-Based Framework for Uptake Segmentation and Classification on PSMA PET/CT Imaging of Patients with Prostate Cancer
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-08 | DOI: 10.1007/s10278-024-01104-y
Yang Li, Maliha R. Imami, Linmei Zhao, Alireza Amindarolzarbi, Esther Mena, Jeffrey Leal, Junyu Chen, Andrei Gafita, Andrew F. Voter, Xin Li, Yong Du, Chengzhang Zhu, Peter L. Choyke, Beiji Zou, Zhicheng Jiao, Steven P. Rowe, Martin G. Pomper, Harrison X. Bai

Uptake segmentation and classification on PSMA PET/CT are important for automating whole-body tumor burden determination. We developed and evaluated an automated deep learning (DL)-based framework that segments and classifies uptake on PSMA PET/CT. We identified 193 [18F] DCFPyL PET/CT scans of patients with biochemically recurrent prostate cancer from two institutions: 137 scans for training and internal testing, and 56 scans from another institution for external testing. Two radiologists segmented and labelled foci as suspicious or non-suspicious for malignancy. A DL-based segmentation was developed with two independent CNNs, and anatomical prior guidance was applied to focus the DL framework on PSMA-avid lesions. Segmentation performance was evaluated by Dice, IoU, precision, and recall. The classification model was constructed with a multi-modal decision fusion framework and evaluated by accuracy, AUC, F1 score, precision, and recall. Automatic segmentation of suspicious lesions improved under prior guidance, with mean Dice, IoU, precision, and recall of 0.700, 0.566, 0.809, and 0.660 on the internal test set and 0.680, 0.548, 0.749, and 0.740 on the external test set. Our multi-modal decision fusion framework outperformed single-modal and multi-modal CNNs in distinguishing suspicious from non-suspicious foci, with accuracy, AUC, F1 score, precision, and recall of 0.764, 0.863, 0.844, 0.841, and 0.847 on the internal test set and 0.796, 0.851, 0.865, 0.814, and 0.923 on the external test set. DL-based lesion segmentation on PSMA PET is facilitated by our anatomical prior guidance strategy, and our classification framework differentiates suspicious foci from those not suspicious for cancer with good accuracy.
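
For reference, the Dice and IoU metrics used to score the segmentation can be computed as below. This is a generic NumPy sketch of the standard overlap metrics, not code from the study.

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Dice and IoU for binary masks, as used to score lesion segmentation."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

pred = np.zeros((64, 64)); pred[10:30, 10:30] = 1   # toy predicted mask
gt = np.zeros((64, 64)); gt[12:32, 12:32] = 1       # toy ground-truth mask
print(dice_iou(pred, gt))
```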

Citations: 0
A Classification-Based Adaptive Segmentation Pipeline: Feasibility Study Using Polycystic Liver Disease and Metastases from Colorectal Cancer CT Images
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-08 | DOI: 10.1007/s10278-024-01072-3
Peilong Wang, Timothy L. Kline, Andrew D. Missert, Cole J. Cook, Matthew R. Callstrom, Alex Chan, Robert P. Hartman, Zachary S. Kelm, Panagiotis Korfiatis

Automated segmentation tools often encounter accuracy and adaptability issues when applied to images of differing pathology. The purpose of this study is to explore the feasibility of building a workflow that efficiently routes images to specifically trained segmentation models. By implementing a deep learning classifier that automatically classifies images and routes them to the appropriate segmentation model, the workflow can accurately segment images with different pathologies. The data used in this study are 350 CT images from patients affected by polycystic liver disease and 350 CT images from patients presenting with liver metastases from colorectal cancer. All images had the liver manually segmented by trained imaging analysts. The proposed adaptive segmentation workflow achieved a statistically significant improvement on the task of total liver segmentation compared to the generic single-segmentation model (non-parametric Wilcoxon signed-rank test, n = 100, p << 0.001). This approach is applicable in a wide range of scenarios and should prove useful in clinical implementations of segmentation pipelines.
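
A minimal sketch of the routing idea follows. The classifier and the two pathology-specific segmentation networks are hypothetical stand-ins (random weights, identity "segmenters") for the trained models the study describes; only the classify-then-route control flow is the point.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the trained classifier and segmentation models.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
seg_models = {"polycystic": nn.Identity(), "metastases": nn.Identity()}

def adaptive_segment(ct_slice: torch.Tensor) -> torch.Tensor:
    """Classify the scan, then route it to the matching segmentation model."""
    with torch.no_grad():
        label = ["polycystic", "metastases"][classifier(ct_slice).argmax(1).item()]
        return seg_models[label](ct_slice)   # pathology-specific liver mask

print(adaptive_segment(torch.randn(1, 1, 64, 64)).shape)
```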

Citations: 0
A Method for Efficient De-identification of DICOM Metadata and Burned-in Pixel Text
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-08 | DOI: 10.1007/s10278-024-01098-7
Jacob A. Macdonald, Katelyn R. Morgan, Brandon Konkel, Kulsoom Abdullah, Mark Martin, Cory Ennis, Joseph Y. Lo, Marissa Stroo, Denise C. Snyder, Mustafa R. Bashir

De-identification of DICOM images is an essential component of medical image research. While many established methods exist for the safe removal of protected health information (PHI) from DICOM metadata, approaches for the removal of PHI "burned in" to image pixel data are typically manual, and automated high-throughput approaches are not well validated. Emerging optical character recognition (OCR) models can potentially detect and remove PHI-bearing text from medical images but are very time-consuming to run on the high volume of images found in typical research studies. We present a data processing method that performs metadata de-identification for all images, combined with a targeted approach that applies OCR only to images with a high likelihood of burned-in text. The method was validated on a dataset of 415,182 images across ten modalities, representative of the de-identification requests submitted at our institution over a 20-year span. Of the 12,578 images in this dataset with burned-in text of any kind, only 10 passed undetected. OCR was required for only 6050 images (1.5% of the dataset).
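
A hedged sketch of the two-stage idea using pydicom and pytesseract: scrub metadata for every file, and run the costly OCR pass only on images judged likely to carry burned-in text. The PHI tag list is abbreviated and the modality heuristic is an assumption; the paper's actual targeting criteria are not given in the abstract.

```python
import numpy as np
import pydicom
import pytesseract
from PIL import Image

PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "OtherPatientIDs"]
BURN_IN_PRONE = {"US", "SC", "OT"}  # modalities assumed likely to carry burned-in text

def deidentify(path: str) -> pydicom.Dataset:
    ds = pydicom.dcmread(path)
    for tag in PHI_TAGS:                       # metadata scrub for every image
        if hasattr(ds, tag):
            setattr(ds, tag, "")
    if ds.get("Modality") in BURN_IN_PRONE:    # run OCR only when likely needed
        img = Image.fromarray(ds.pixel_array.astype(np.uint8))
        if pytesseract.image_to_string(img).strip():
            print(f"{path}: burned-in text found, flag for pixel redaction")
    return ds

# Usage: deidentify("/path/to/study.dcm").save_as("/path/to/deid.dcm")
```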

Citations: 0
A Comparative Analysis of Deep Learning-Based Approaches for Classifying Dental Implants Decision Support System
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-02 | DOI: 10.1007/s10278-024-01086-x
Mohammed A. H. Lubbad, Ikbal Leblebicioglu Kurtulus, Dervis Karaboga, Kerem Kilic, Alper Basturk, Bahriye Akay, Ozkan Ufuk Nalbantoglu, Ozden Melis Durmaz Yilmaz, Mustafa Ayata, Serkan Yilmaz, Ishak Pacal

This study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. It also seeks to ascertain the system's potential in clinical practice and to offer a strategic framework for improving diagnosis and treatment processes in implantology. The study employed a total of 28 deep learning models, including 18 convolutional neural network (CNN) models (VGG, ResNet, DenseNet, EfficientNet, RegNet, ConvNeXt) and 10 vision transformer models (Swin and Vision Transformer). The dataset comprises 1258 panoramic radiographs from patients who received implant treatments at Erciyes University Faculty of Dentistry between 2012 and 2023; it is used for training and evaluating the deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. The deep learning-based system achieved high classification accuracy across the different dental implant brands. Furthermore, among all the architectures evaluated, the small ConvNeXt model achieved an impressive accuracy of 94.2%, demonstrating a high level of classification success. This study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy for dental implant types. These findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes.
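
A sketch of how such a multi-architecture comparison might be set up with the timm library. The architecture list is abbreviated and the input size and other hyper-parameters are assumptions, not the study's protocol.

```python
import timm
import torch

# Abbreviated, hypothetical selection from the CNN and ViT families compared.
ARCHS = [
    "convnext_small",
    "resnet50",
    "vit_base_patch16_224",
    "swin_tiny_patch4_window7_224",
]

for arch in ARCHS:
    # num_classes=6 for the six implant systems; set pretrained=True to
    # start from ImageNet weights as transfer-learning studies commonly do.
    model = timm.create_model(arch, pretrained=False, num_classes=6)
    x = torch.randn(1, 3, 224, 224)   # panoramic radiograph, resized
    print(arch, model(x).shape)       # logits over the six brands
```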

Citations: 0
Fusing Diverse Decision Rules in 3D-Radiomics for Assisting Diagnosis of Lung Adenocarcinoma
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-02 | DOI: 10.1007/s10278-024-00967-5
He Ren, Qiubo Wang, Zhengguang Xiao, Runwei Mo, Jiachen Guo, Gareth Richard Hide, Mengting Tu, Yanan Zeng, Chen Ling, Ping Li

This study aimed to develop an interpretable diagnostic model for subtyping pulmonary adenocarcinoma, including minimally invasive adenocarcinoma (MIA), adenocarcinoma in situ (AIS), and invasive adenocarcinoma (IAC), by integrating 3D radiomic features and clinical data. Data from multiple hospitals were collected, and 10 key features were selected from 1600 3D radiomic signatures and 11 radiological features. Diverse decision rules were extracted using ensemble learning methods (gradient boosting, random forest, and AdaBoost), then fused, ranked, and selected via RuleFit and SHAP to construct a rule-based diagnostic model. The model's performance was evaluated using AUC, precision, accuracy, recall, and F1 score and compared with other models. The rule-based diagnostic model performed excellently in the training, testing, and validation cohorts, with AUC values of 0.9621, 0.9529, and 0.8953, respectively, outperforming counterparts relying solely on the selected features as well as previous research models, whose AUC values in the three cohorts were 0.851, 0.893, and 0.836. Notably, the individual GBDT, random forest, and AdaBoost models achieved AUC values of 0.9391, 0.8681, and 0.9449 in the training cohort; 0.9093, 0.8722, and 0.9363 in the testing cohort; and 0.8440, 0.8640, and 0.8750 in the validation cohort, respectively. These results highlight the superiority of the rule-based diagnostic model in assessing lung adenocarcinoma subtypes while also providing insight into the performance of the individual models. Integrating diverse decision rules enhanced the accuracy and interpretability of the diagnostic model for lung adenocarcinoma subtypes. This approach bridges the gap between complex predictive models and clinical utility, offering valuable support to healthcare professionals and patients.
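
A minimal scikit-learn sketch of the first stage: training the three ensembles and soft-fusing their predictions. The paper's actual pipeline extracts decision rules and selects them via RuleFit and SHAP, which is more involved and not reproduced here; the synthetic data merely stands in for the 10 selected features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 10 selected radiomic/radiological features.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [GradientBoostingClassifier(), RandomForestClassifier(), AdaBoostClassifier()]
probs = [m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models]
fused = np.mean(probs, axis=0)   # simple soft-vote fusion of the three ensembles
print("fused AUC:", roc_auc_score(y_te, fused))
```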

Citations: 0
End-to-End Multi-task Learning Architecture for Brain Tumor Analysis with Uncertainty Estimation in MRI Images
IF 4.4 | CAS Zone 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-04-02 | DOI: 10.1007/s10278-024-01009-w
Maria Nazir, Sadia Shakil, Khurram Khurshid

Brain tumors threaten the lives of adults and children alike. Gliomas are among the deadliest brain tumors and are extremely difficult to diagnose, because their complex, heterogeneous structure gives rise to both subjective and objective errors. Their manual segmentation is a laborious task due to their complex structure and irregular appearance. To address these issues, much research has been and is being done to develop AI-based solutions that can help doctors and radiologists diagnose gliomas effectively with minimal subjective and objective error, but an end-to-end system is still missing. This research proposes an all-in-one framework: an end-to-end multi-task learning (MTL) architecture with a feature attention module that can classify, segment, and predict the overall survival of gliomas by leveraging relationships between similar tasks. Uncertainty estimation has also been incorporated into the framework to enhance the confidence of healthcare practitioners. Extensive experimentation was performed using combinations of MRI sequences on the 2019 and 2020 brain tumor segmentation (BraTS) challenge datasets. The best model, using four sequences, achieved 95.1% classification accuracy, an 86.3% Dice score for segmentation, and a mean absolute error (MAE) of 456.59 for survival prediction on the test data. The results show that deep learning-based MTL models have the potential to automate the whole brain tumor analysis process and deliver efficient results with minimal inference time and without human intervention. Uncertainty quantification confirms that more data can improve generalization ability, in turn producing more accurate results with less uncertainty. The proposed model has the potential to be used in a clinical setting for the initial screening of glioma patients.
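
The abstract does not state which uncertainty estimation method the framework uses; Monte Carlo dropout is one common choice and is sketched below with a hypothetical classification head, purely as an illustration of the general technique.

```python
import torch
import torch.nn as nn

# Hypothetical classification head standing in for the MTL model's output branch.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 3))

def mc_dropout_predict(x: torch.Tensor, T: int = 30):
    """Run T stochastic forward passes with dropout active; the spread of the
    sampled predictions serves as an uncertainty estimate."""
    model.train()                    # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x).softmax(-1) for _ in range(T)])
    return samples.mean(0), samples.std(0)   # prediction and its uncertainty

mean, std = mc_dropout_predict(torch.randn(1, 128))
print(mean, std)
```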

Citations: 0