
Journal of imaging informatics in medicine: Latest Publications

LAMA: Lesion-Aware Mixup Augmentation for Skin Lesion Segmentation.
Pub Date : 2024-08-01 Epub Date: 2024-02-26 DOI: 10.1007/s10278-024-01000-5
Norsang Lama, Ronald Joe Stanley, Binita Lama, Akanksha Maurya, Anand Nambisan, Jason Hagerty, Thanh Phan, William Van Stoecker

Deep learning can exceed dermatologists' diagnostic accuracy in experimental image environments. However, inaccurate segmentation of images with multiple skin lesions can be seen with current methods. Thus, information present in multiple-lesion images, available to specialists, is not retrievable by machine learning. While skin lesion images generally capture a single lesion, there may be cases in which a patient's skin variation may be identified as skin lesions, leading to multiple false positive segmentations in a single image. Conversely, image segmentation methods may find only one region and may not capture multiple lesions in an image. To remedy these problems, we propose a novel and effective data augmentation technique for skin lesion segmentation in dermoscopic images with multiple lesions. The lesion-aware mixup augmentation (LAMA) method generates a synthetic multi-lesion image by mixing two or more lesion images from the training set. We used the publicly available International Skin Imaging Collaboration (ISIC) 2017 Challenge skin lesion segmentation dataset to train the deep neural network with the proposed LAMA method. As none of the previous skin lesion datasets (including ISIC 2017) has considered multiple lesions per image, we created a new multi-lesion (MuLe) segmentation dataset utilizing publicly available ISIC 2020 skin lesion images with multiple lesions per image. MuLe was used as a test set to evaluate the effectiveness of the proposed method. Our test results show that the proposed method improved the Jaccard score 8.3% from 0.687 to 0.744 and the Dice score 5% from 0.7923 to 0.8321 over a baseline model on MuLe test images. On the single-lesion ISIC 2017 test images, LAMA improved the baseline model's segmentation performance by 0.08%, raising the Jaccard score from 0.7947 to 0.8013 and the Dice score 0.6% from 0.8714 to 0.8766. The experimental results showed that LAMA improved the segmentation accuracy on both single-lesion and multi-lesion dermoscopic images. The proposed LAMA technique warrants further study.
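Below is a minimal Python sketch of the mixup-style blending the abstract describes, assuming two training images with binary lesion masks; the function name, the beta-distributed mixing coefficient, and the union of the masks are illustrative choices, not the authors' exact LAMA procedure.

```python
import numpy as np

def mixup_lesion_pair(img_a, mask_a, img_b, mask_b, alpha=0.4, rng=None):
    """Blend two lesion images and union their masks (generic mixup-style sketch).

    img_*  : float arrays in [0, 1] with shape (H, W, 3)
    mask_* : binary arrays with shape (H, W)
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                    # mixing coefficient, as in standard mixup
    mixed_img = lam * img_a + (1.0 - lam) * img_b   # pixel-wise blend of the two images
    mixed_mask = np.clip(mask_a + mask_b, 0, 1)     # keep every lesion pixel as foreground
    return mixed_img.astype(np.float32), mixed_mask.astype(np.uint8)

# toy usage with random stand-in data
rng = np.random.default_rng(0)
a, b = rng.random((256, 256, 3)), rng.random((256, 256, 3))
ma = (rng.random((256, 256)) > 0.95).astype(np.uint8)
mb = (rng.random((256, 256)) > 0.95).astype(np.uint8)
img, mask = mixup_lesion_pair(a, ma, b, mb, rng=rng)
print(img.shape, int(mask.sum()))
```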

Citations: 0
Adaptive Machine Learning Approach for Importance Evaluation of Multimodal Breast Cancer Radiomic Features.
Pub Date : 2024-08-01 Epub Date: 2024-03-13 DOI: 10.1007/s10278-024-01064-3
Giulio Del Corso, Danila Germanese, Claudia Caudai, Giada Anastasi, Paolo Belli, Alessia Formica, Alberto Nicolucci, Simone Palma, Maria Antonietta Pascali, Stefania Pieroni, Charlotte Trombadori, Sara Colantonio, Michela Franchini, Sabrina Molinaro

Breast cancer holds the highest diagnosis rate among female tumors and is the leading cause of death among women. Quantitative analysis of radiological images shows the potential to address several medical challenges, including the early detection and classification of breast tumors. In the P.I.N.K study, 66 women were enrolled. Their paired Automated Breast Volume Scanner (ABVS) and Digital Breast Tomosynthesis (DBT) images, annotated with cancerous lesions, populated the first ABVS+DBT dataset. This enabled not only a radiomic analysis for the malignant vs. benign breast cancer classification, but also the comparison of the two modalities. For this purpose, the models were trained using a leave-one-out nested cross-validation strategy combined with a proper threshold selection approach. This approach provides statistically significant results even with medium-sized data sets. Additionally it provides distributional variables of importance, thus identifying the most informative radiomic features. The analysis proved the predictive capacity of radiomic models even using a reduced number of features. Indeed, from tomography we achieved AUC-ROC 89.9% using 19 features and 92.1% using 7 of them; while from ABVS we attained an AUC-ROC of 72.3% using 22 features and 85.8% using only 3 features. Although the predictive power of DBT outperforms ABVS, when comparing the predictions at the patient level, only 8.7% of lesions are misclassified by both methods, suggesting a partial complementarity. Notably, promising results (AUC-ROC ABVS-DBT 71.8%-74.1%) were achieved using non-geometric features, thus opening the way to the integration of virtual biopsy in medical routine.
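A hedged sketch of the outer leave-one-out loop follows, assuming a logistic-regression classifier over a toy feature matrix; the study's nested feature selection and threshold tuning inside each fold are omitted, and the feature counts are only placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

def loo_probabilities(X, y):
    """Out-of-sample probabilities from an outer leave-one-out loop."""
    probs = np.zeros(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        probs[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    return probs

# toy stand-in for a selected radiomic feature matrix: 66 patients, 19 features
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 19))
y = np.arange(66) % 2          # toy malignant (1) vs. benign (0) labels
print("AUC-ROC:", roc_auc_score(y, loo_probabilities(X, y)))
```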

Citations: 0
Classification of Lung Diseases Using an Attention-Based Modified DenseNet Model.
Pub Date : 2024-08-01 Epub Date: 2024-03-11 DOI: 10.1007/s10278-024-01005-0
Upasana Chutia, Anand Shanker Tewari, Jyoti Prakash Singh, Vikash Kumar Raj

Lung diseases represent a significant global health threat, impacting both well-being and mortality rates. Diagnostic procedures such as Computed Tomography (CT) scans and X-ray imaging play a pivotal role in identifying these conditions. X-rays, due to their easy accessibility and affordability, serve as a convenient and cost-effective option for diagnosing lung diseases. Our proposed method utilized the Contrast-Limited Adaptive Histogram Equalization (CLAHE) enhancement technique on X-ray images to highlight the key feature maps related to lung diseases using DenseNet201. We have augmented the existing Densenet201 model with a hybrid pooling and channel attention mechanism. The experimental results demonstrate the superiority of our model over well-known pre-trained models, such as VGG16, VGG19, InceptionV3, Xception, ResNet50, ResNet152, ResNet50V2, ResNet152V2, MobileNetV2, DenseNet121, DenseNet169, and DenseNet201. Our model achieves impressive accuracy, precision, recall, and F1-scores of 95.34%, 97%, 96%, and 96%, respectively. We also provide visual insights into our model's decision-making process using Gradient-weighted Class Activation Mapping (Grad-CAM) to identify normal, pneumothorax, and atelectasis cases. The experimental results of our model in terms of heatmap may help radiologists improve their diagnostic abilities and labelling processes.
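A short sketch of the CLAHE preprocessing step using OpenCV, with a synthetic 8-bit image standing in for a chest X-ray; the downstream DenseNet201 backbone and the paper's hybrid pooling and channel-attention head are only indicated in a comment.

```python
import cv2
import numpy as np

def clahe_enhance(gray_u8, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast-Limited Adaptive Histogram Equalization on an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_u8)

# a synthetic 8-bit image stands in for a chest X-ray here
xray = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)
enhanced = clahe_enhance(xray)
print(enhanced.shape, enhanced.dtype)

# the enhanced image would then be resized, stacked to 3 channels, and fed to a
# DenseNet201 backbone, e.g. with Keras:
# tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
#                                   input_shape=(224, 224, 3))
```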

Citations: 0
Image Omics Nomogram Based on Incoherent Motion Diffusion-Weighted Imaging in Voxels Predicts ATRX Gene Mutation Status of Brain Glioma Patients.
Pub Date : 2024-08-01 Epub Date: 2024-02-20 DOI: 10.1007/s10278-024-00984-4
Xueyao Lin, Chaochao Wang, Jingjing Zheng, Mengru Liu, Ming Li, Hongbin Xu, Haibo Dong

This study aimed to construct an imaging genomics nomogram based on intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI) to predict the status of the alpha thalassemia/mental retardation syndrome X-linked (ATRX) gene in patients with brain gliomas. We retrospectively analyzed routine MR and IVIM-DWI data from 85 patients with pathologically confirmed brain gliomas from January 2017 to May 2023. The data were divided into a training set (N=61) and a test set (N=24) in a 7:3 ratio. Regions of interest (ROIs) of brain gliomas, including the solid tumor region (rCET), edema region (rE), and necrotic region (rNec), were delineated using 3D-Slicer software and projected onto the D, D*, and f sequences. A total of 1037 features were extracted from each ROI, resulting in 3111 features per patient. Age was incorporated in the calculation of the Radscore, and a clinical-imaging genomics combined model was constructed, from which a nomogram graph was generated. Separate models were built for the D, D*, and f parameters. The AUC value of the D parameter model was 0.97 (95% CI: 0.93-1.00) in the training set and 0.91 (95% CI: 0.79-1.00) in the validation set, which was significantly higher than that of the D* parameter model (0.90, 0.82) and the f parameter model (0.89, 0.91). The imaging genomics nomogram based on IVIM-DWI can effectively predict the ATRX gene status of patients with brain gliomas, with the D parameter showing the highest efficacy.
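A minimal sketch of the combined clinical-radiomics logistic model the abstract describes, on toy data; the Radscore here is a stand-in scalar, and pairing it with age mirrors the abstract's combined model rather than reproducing the authors' nomogram.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 85
radscore = rng.normal(size=n)          # stand-in for the radiomics score from the D-map features
age = rng.normal(55.0, 12.0, size=n)   # patient age, combined with the Radscore as in the abstract
X = np.column_stack([radscore, age])
y = np.arange(n) % 2                   # toy ATRX labels (0 = wild type, 1 = mutated)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("apparent AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```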

Citations: 0
Radiomics Features on Enhanced Computed Tomography Predict FOXP3 Expression and Clinical Prognosis in Patients with Head and Neck Squamous Cell Carcinoma.
Pub Date : 2024-08-01 Epub Date: 2024-02-20 DOI: 10.1007/s10278-023-00910-0
Yi Wang, Juan Ye, Kai Zhou, Nian Chen, Gang Huang, Guangyong Feng, Guihai Zhang, Xiaoxia Gou

Forkhead box P3 (FOXP3) has been identified as a novel molecular marker in various types of cancer. The present study assessed the expression of FOXP3 in patients with head and neck squamous cell carcinoma (HNSCC) and its potential as a clinical prognostic indicator, and developed a radiomics model based on enhanced computed tomography (CT) imaging. Data from 483 patients with HNSCC were downloaded from the Cancer Genome Atlas for FOXP3 prognostic analysis and enhanced CT images from 139 patients included in the Cancer Imaging Archives, which were subjected to the maximum relevance and minimum redundancy and recursive feature elimination algorithms for radiomics feature extraction and processing. Logistic regression was used to build a model for predicting FOXP3 expression. A prognostic scoring system for radiomics score (RS), FOXP3, and patient clinicopathological factors was established to predict patient survival. The area under the receiver operating characteristic (ROC) curve (AUC) and calibration curve and decision curve analysis (DCA) were used to evaluate model performance. Furthermore, the relationship between FOXP3 and the immune microenvironment, as well as the association between RS and immune checkpoint-related genes, was analyzed. Results of analysis revealed that patients with HNSCC and high FOXP3 mRNA expression exhibited better overall survival. Immune infiltration analysis revealed that FOXP3 had a positive correlation with CD4+ and CD8+ T cells and other immune cells. The 8 best radiomics features were selected to construct the radiomics model. In the FOXP3 expression prediction model, the AUC values were 0.707 and 0.702 for the training and validation sets, respectively. Additionally, the calibration curve and DCA demonstrated the positive diagnostic utility of the model. RS was correlated with immune checkpoint-related genes such as ICOS, CTLA4, and PDCD1. A predictive nomogram was established, the AUCs were 0.87, 0.787, and 0.801 at 12, 24, and 36 months, respectively, and DCA demonstrated the high clinical applicability of the nomogram. The enhanced CT radiomics model can predict expression of FOXP3 and prognosis in patients with HNSCC. As such, FOXP3 may be used as a novel prognostic marker to improve individualized clinical diagnosis and treatment decisions.
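A sketch of the recursive-feature-elimination step with scikit-learn on toy data; the mRMR pre-filtering mentioned in the abstract requires an external package and is omitted, and the 8-feature target simply mirrors the abstract.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(139, 200))   # toy radiomic feature matrix (139 patients, 200 candidate features)
y = np.arange(139) % 2            # toy high vs. low FOXP3 expression labels

# recursive feature elimination down to 8 features, as in the abstract;
# the preceding mRMR filter is omitted here
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8).fit(X, y)
model = LogisticRegression(max_iter=1000).fit(X[:, selector.support_], y)
print("selected feature indices:", np.flatnonzero(selector.support_))
```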

Citations: 0
Systematic Review of Retinal Blood Vessels Segmentation Based on AI-driven Technique.
Pub Date : 2024-08-01 Epub Date: 2024-03-04 DOI: 10.1007/s10278-024-01010-3
Prem Kumari Verma, Jagdeep Kaur

Image segmentation is a crucial task in computer vision and image processing, with numerous segmentation algorithms being found in the literature. It has important applications in scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, image compression, among others. In light of this, the widespread popularity of deep learning (DL) and machine learning has inspired the creation of fresh methods for segmenting images using DL and ML models respectively. We offer a thorough analysis of this recent literature, encompassing the range of ground-breaking initiatives in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based methods, recurrent networks, visual attention models, and generative models in adversarial settings. We study the connections, benefits, and importance of various DL- and ML-based segmentation models; look at the most popular datasets; and evaluate results in this Literature.
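To make the surveyed model families concrete, here is a toy encoder-decoder segmenter in PyTorch; it is a generic illustration of that architecture class, not any specific published model from the review.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Toy encoder-decoder segmenter; a generic illustration, not a published model."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1),   # one-channel vessel probability map
        )

    def forward(self, x):
        return torch.sigmoid(self.dec(self.enc(x)))

# toy batch of 2 RGB fundus-like images, 64x64
print(TinyEncoderDecoder()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 1, 64, 64])
```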

Citations: 0
Identification and Localization of Indolent and Aggressive Prostate Cancers Using Multilevel Bi-LSTM.
Pub Date : 2024-08-01 Epub Date: 2024-03-06 DOI: 10.1007/s10278-024-01030-z
Afnan M Alhassan

Identifying indolent and aggressive prostate cancers is a critical problem for optimal treatment. The existing approaches of prostate cancer detection are facing challenges as the techniques rely on ground truth labels with limited accuracy, and histological similarity, and do not consider the disease pathology characteristics, and indefinite differences in appearance between the cancerous and healthy tissue lead to many false positive and false negative interpretations. Hence, this research introduces a comprehensive framework designed to achieve accurate identification and localization of prostate cancers, irrespective of their aggressiveness. This is accomplished through the utilization of a sophisticated multilevel bidirectional long short-term memory (Bi-LSTM) model. The pre-processed images are subjected to multilevel feature map-based U-Net segmentation, bolstered by ResNet-101 and a channel-based attention module that improves the performance. Subsequently, segmented images undergo feature extraction, encompassing various feature types, including statistical features, a global hybrid-based feature map, and a ResNet-101 feature map that enhances the detection accuracy. The extracted features are fed to the multilevel Bi-LSTM model, further optimized through channel and spatial attention mechanisms that offer the effective localization and recognition of complex structures of cancer. Further, the framework represents a promising approach for enhancing the diagnosis and localization of prostate cancers, encompassing both indolent and aggressive cases. Rigorous testing on a distinct dataset demonstrates the model's effectiveness, with performance evaluated through key metrics which are reported as 96.72%, 96.17%, and 96.17% for accuracy, sensitivity, and specificity respectively utilizing the dataset 1. For dataset 2, the model achieves the accuracy, sensitivity, and specificity values of 94.41%, 93.10%, and 94.96% respectively. These results surpass the efficiency of alternative methods.
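A compact PyTorch sketch of a stacked bidirectional LSTM classifier over a feature sequence; the paper's full pipeline (U-Net segmentation, ResNet-101 feature maps, channel and spatial attention) is not reproduced here, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class StackedBiLSTMClassifier(nn.Module):
    """Stacked bidirectional LSTM over a feature sequence (dimensions are illustrative)."""
    def __init__(self, in_dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, seq_len, in_dim)
        out, _ = self.bilstm(x)            # (batch, seq_len, 2 * hidden)
        return self.head(out.mean(dim=1))  # average over the sequence, then classify

# toy batch: 4 samples, each a sequence of 32 feature vectors
logits = StackedBiLSTMClassifier()(torch.randn(4, 32, 256))
print(logits.shape)  # torch.Size([4, 2])
```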

Citations: 0
Towards an EKG for SBO: A Neural Network for Detection and Characterization of Bowel Obstruction on CT.
Pub Date : 2024-08-01 Epub Date: 2024-02-22 DOI: 10.1007/s10278-024-01023-y
Paul M Murphy

A neural network was developed to detect and characterize bowel obstruction, a common cause of acute abdominal pain. In this retrospective study, 202 CT scans of 165 patients with bowel obstruction from March to June 2022 were included and partitioned into training and test data sets. A multi-channel neural network was trained to segment the gastrointestinal tract, and to predict the diameter and the longitudinal position ("longitude") along the gastrointestinal tract using a novel embedding. Its performance was compared to manual segmentations using the Dice score, and to manual measurements of the diameter and longitude using intraclass correlation coefficients (ICC). ROC curves as well as sensitivity and specificity were calculated for diameters above a clinical threshold for obstruction, and for longitudes corresponding to small bowel. In the test data set, Dice score for segmentation of the gastrointestinal tract was 78 ± 8%. ICC between measured and predicted diameters was 0.72, indicating moderate agreement. ICC between measured and predicted longitude was 0.85, indicating good agreement. AUROC was 0.90 for detection of dilated bowel, and was 0.95 and 0.90 for differentiation of the proximal and distal gastrointestinal tract respectively. Overall sensitivity and specificity for dilated small bowel were 0.83 and 0.90. Since obstruction is diagnosed based on the diameter and longitude of the bowel, this neural network and embedding may enable detection and characterization of this important disease on CT.
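One of the evaluation metrics in the abstract, the Dice score, is easy to state in code; this sketch assumes binary NumPy masks and toy data, not the study's actual segmentations.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# toy masks standing in for predicted vs. manual gastrointestinal-tract segmentations
rng = np.random.default_rng(0)
a = rng.random((64, 64, 64)) > 0.7
b = rng.random((64, 64, 64)) > 0.7
print(f"Dice = {dice_score(a, b):.3f}")
```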

Citations: 0
CT-Based Evaluation of the Shape of the Diaphragm Using 3D Slicer.
Pub Date : 2024-08-01 Epub Date: 2024-03-11 DOI: 10.1007/s10278-024-01069-y
Olivier Taton, Alain Van Muylem, Dimitri Leduc, Pierre Alain Gevenois

The diaphragm is the main inspiratory muscle and separates the thorax and the abdomen. In COPD, the evaluation of the diaphragm shape is clinically important, especially in the case of hyperinflation. However, delineating the diaphragm remains a challenge as it cannot be seen entirely on CT scans. Therefore, the lungs, ribs, sternum, and lumbar vertebrae are used as surrogate landmarks to delineate the diaphragm. We herein describe a CT-based method for evaluating the shape of the diaphragm using 3D Slicer-a free software that allows delineation of the diaphragm landmarks-in ten COPD patients. Using the segmentation performed with 3D Slicer, the diaphragm shape was reconstructed with open-source Free Pascal Compiler. From this graduated model, the length of the muscle fibers, the radius of curvature, and the area of the diaphragm-the main determinants of its function-can be measured. Inter- and intra-user variabilities were evaluated with Bland and Altman plots and linear mixed models. Except for the coronal length (p = 0.049), there were not statistically significant inter- or intra-user differences (p values ranging from 0.326 to 0.910) suggesting that this method is reproducible and repeatable. In conclusion, 3D Slicer can be applied to CT scans for determining the shape of the diaphragm in COPD patients.
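A small sketch of the Bland-Altman agreement statistics used for the inter- and intra-user comparison, on toy paired measurements; the variable names and units are assumptions for illustration.

```python
import numpy as np

def bland_altman(x1, x2):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(x1, float) - np.asarray(x2, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# toy paired measurements standing in for two users' diaphragm fiber lengths (cm)
rng = np.random.default_rng(0)
user1 = rng.normal(10.0, 1.0, size=10)
user2 = user1 + rng.normal(0.0, 0.2, size=10)
print("bias, lower LoA, upper LoA:", bland_altman(user1, user2))
```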

Citations: 0
Accuracy Analysis of 3D Bone Fracture Models: Effects of Computed Tomography (CT) Imaging and Image Segmentation.
Pub Date : 2024-08-01 Epub Date: 2024-03-14 DOI: 10.1007/s10278-024-00998-y
Martin Bittner-Frank, Andreas Strassl, Ewald Unger, Lena Hirtler, Barbara Eckhart, Markus Koenigshofer, Alexander Stoegner, Arastoo Nia, Domenik Popp, Franz Kainberger, Reinhard Windhager, Francesco Moscato, Emir Benca

The introduction of three-dimensional (3D) printed anatomical models has garnered interest in pre-operative planning, especially in orthopedic and trauma surgery. Identifying potential error sources and quantifying their effect on the model dimensional accuracy are crucial for the applicability and reliability of such models. In this study, twenty radii were extracted from anatomic forearm specimens and subjected to osteotomy to simulate a defined fracture of the distal radius (Colles' fracture). Various factors, including two different computed tomography (CT) technologies (energy-integrating detector (EID) and photon-counting detector (PCD)), four different CT scanners, two scan protocols (i.e., routine and high dosage), two different scan orientations, as well as two segmentation algorithms were considered to determine their effect on 3D model accuracy. Ground truth was established using 3D reconstructions of surface scans of the physical specimens. Results indicated that all investigated variables significantly impacted the 3D model accuracy (p < 0.001). However, the mean absolute deviation fell within the range of 0.03 ± 0.20 to 0.32 ± 0.23 mm, well below the 0.5 mm threshold necessary for pre-operative planning. Intra- and inter-operator variability demonstrated fair to excellent agreement for 3D model accuracy, with an intra-class correlation (ICC) of 0.43 to 0.92. This systematic investigation displayed dimensional deviations in the magnitude of sub-voxel imaging resolution for all variables. Major pitfalls included missed or overestimated bone regions during the segmentation process, necessitating additional manual editing of 3D models. In conclusion, this study demonstrates that 3D bone fracture models can be obtained with clinical routine scanners and scan protocols, utilizing a simple global segmentation threshold, thereby providing an accurate and reliable tool for pre-operative planning.
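A minimal sketch of the simple global segmentation threshold the study highlights, applied to a toy CT volume in Hounsfield units; the specific threshold value is an assumption, not the one used in the paper.

```python
import numpy as np

def threshold_bone(ct_hu, threshold=300.0):
    """Binary bone mask from a CT volume via a single global Hounsfield-unit threshold."""
    return (ct_hu >= threshold).astype(np.uint8)

# toy CT volume in Hounsfield units
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 400.0, size=(128, 128, 128))
mask = threshold_bone(ct)
print("bone voxels:", int(mask.sum()))
```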

Citations: 0