
Frontiers in radiology: Latest Publications

Retrospective quantification of clinical abdominal DCE-MRI using pharmacokinetics-informed deep learning: a proof-of-concept study.
Pub Date: 2023-09-04 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1168901
Chaowei Wu, Nan Wang, Srinivas Gaddam, Lixia Wang, Hui Han, Kyunghyun Sung, Anthony G Christodoulou, Yibin Xie, Stephen Pandol, Debiao Li

Introduction: Dynamic contrast-enhanced (DCE) MRI has important clinical value for early detection, accurate staging, and therapeutic monitoring of cancers. However, conventional multi-phasic abdominal DCE-MRI has limited temporal resolution and provides only qualitative or semi-quantitative assessments of tissue vascularity. In this study, we investigated the feasibility of retrospectively quantifying multi-phasic abdominal DCE-MRI by using pharmacokinetics-informed deep learning to improve temporal resolution.

Method: Forty-five subjects, comprising healthy controls and patients with pancreatic ductal adenocarcinoma (PDAC) or chronic pancreatitis (CP), were imaged with a 2-s temporal-resolution quantitative DCE sequence, from which 30-s temporal-resolution multi-phasic DCE-MRI was synthesized according to the clinical protocol. A pharmacokinetics-informed neural network was trained to improve the temporal resolution of the multi-phasic DCE before quantification of pharmacokinetic parameters. Through ten-fold cross-validation, pharmacokinetic parameters estimated from the synthesized multi-phasic DCE after deep learning inference were assessed for agreement with reference parameters from the corresponding quantitative DCE-MRI images. The ability of the deep learning-estimated parameters to differentiate abnormal from normal tissues was also assessed.

Results: The pharmacokinetic parameters estimated after deep learning showed a high level of agreement with the reference values. In the cross-validation, all three pharmacokinetic parameters (transfer constant Ktrans, fractional extravascular extracellular volume ve, and rate constant kep) achieved intraclass correlation coefficients and R2 values between 0.84 and 0.94, with low coefficients of variation relative to the reference values (10.1%, 12.3%, and 5.6%, respectively). Significant differences were found between healthy pancreas, PDAC tumor and non-tumor tissue, and CP pancreas.
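
These three parameters are linked through the standard Tofts model, in which kep = Ktrans/ve, so agreement on two of them constrains the third. A minimal sketch of how agreement statistics such as R2 and the coefficient of variation can be computed against reference values, using synthetic Ktrans numbers in place of the study's data (not the authors' code; the ICC computation is omitted for brevity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ref = rng.uniform(0.05, 0.5, size=45)        # reference Ktrans values (min^-1)
est = ref + rng.normal(0.0, 0.02, size=45)   # hypothetical DL-derived estimates

# R^2 from a linear regression of estimates against reference values
r_squared = stats.linregress(ref, est).rvalue ** 2

# Coefficient of variation of the estimate-reference differences,
# expressed relative to the mean reference value
cov_pct = np.std(est - ref, ddof=1) / np.mean(ref) * 100.0

# In the standard Tofts model the parameters are linked: kep = Ktrans / ve
print(f"R^2 = {r_squared:.3f}, CoV = {cov_pct:.1f}%")
```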

Discussion: Retrospective quantification (RoQ) of clinical multi-phasic DCE-MRI is feasible with deep learning. This technique has the potential to derive quantitative pharmacokinetic parameters from clinical multi-phasic DCE data for a more objective and precise assessment of cancer.

{"title":"Retrospective quantification of clinical abdominal DCE-MRI using pharmacokinetics-informed deep learning: a proof-of-concept study.","authors":"Chaowei Wu,&nbsp;Nan Wang,&nbsp;Srinivas Gaddam,&nbsp;Lixia Wang,&nbsp;Hui Han,&nbsp;Kyunghyun Sung,&nbsp;Anthony G Christodoulou,&nbsp;Yibin Xie,&nbsp;Stephen Pandol,&nbsp;Debiao Li","doi":"10.3389/fradi.2023.1168901","DOIUrl":"https://doi.org/10.3389/fradi.2023.1168901","url":null,"abstract":"<p><strong>Introduction: </strong>Dynamic contrast-enhanced (DCE) MRI has important clinical value for early detection, accurate staging, and therapeutic monitoring of cancers. However, conventional multi-phasic abdominal DCE-MRI has limited temporal resolution and provides qualitative or semi-quantitative assessments of tissue vascularity. In this study, the feasibility of retrospectively quantifying multi-phasic abdominal DCE-MRI by using pharmacokinetics-informed deep learning to improve temporal resolution was investigated.</p><p><strong>Method: </strong>Forty-five subjects consisting of healthy controls, pancreatic ductal adenocarcinoma (PDAC), and chronic pancreatitis (CP) were imaged with a 2-s temporal-resolution quantitative DCE sequence, from which 30-s temporal-resolution multi-phasic DCE-MRI was synthesized based on clinical protocol. A pharmacokinetics-informed neural network was trained to improve the temporal resolution of the multi-phasic DCE before the quantification of pharmacokinetic parameters. Through ten-fold cross-validation, the agreement between pharmacokinetic parameters estimated from synthesized multi-phasic DCE after deep learning inference was assessed against reference parameters from the corresponding quantitative DCE-MRI images. The ability of the deep learning estimated parameters to differentiate abnormal from normal tissues was assessed as well.</p><p><strong>Results: </strong>The pharmacokinetic parameters estimated after deep learning have a high level of agreement with the reference values. In the cross-validation, all three pharmacokinetic parameters (transfer constant <math><msup><mi>K</mi><mrow><mrow><mi>trans</mi></mrow></mrow></msup></math>, fractional extravascular extracellular volume <math><msub><mi>v</mi><mi>e</mi></msub></math>, and rate constant <math><msub><mi>k</mi><mrow><mrow><mi>ep</mi></mrow></mrow></msub></math>) achieved intraclass correlation coefficient and <i>R</i><sup>2</sup> between 0.84-0.94, and low coefficients of variation (10.1%, 12.3%, and 5.6%, respectively) relative to the reference values. Significant differences were found between healthy pancreas, PDAC tumor and non-tumor, and CP pancreas.</p><p><strong>Discussion: </strong>Retrospective quantification (RoQ) of clinical multi-phasic DCE-MRI is possible by deep learning. 
This technique has the potential to derive quantitative pharmacokinetic parameters from clinical multi-phasic DCE data for a more objective and precise assessment of cancer.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1168901"},"PeriodicalIF":0.0,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10507354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41168695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial assessments in texture analysis: what the radiologist needs to know.
Pub Date: 2023-08-24 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1240544
Bino A Varghese, Brandon K K Fields, Darryl H Hwang, Vinay A Duddalwar, George R Matcuk, Steven Y Cen

To date, studies investigating radiomics-based predictive models have tended to err on the side of data-driven or exploratory analysis of many thousands of extracted features. In particular, spatial assessments of texture have proven especially adept at capturing intratumoral heterogeneity in oncologic imaging, which in turn may correspond with tumor biology and behavior. These spatial assessments can generally be classified as spatial filters, which detect areas of rapid grayscale change in order to enhance edges and/or textures within an image, or neighborhood-based methods, which quantify gray-level differences of neighboring pixels/voxels within a set distance. Given the high dimensionality of radiomics datasets, data dimensionality reduction methods have been proposed to optimize model performance in machine learning studies; however, these approaches should be applied only to training data in order to avoid information leakage and model overfitting. While the area under the receiver operating characteristic curve is perhaps the most commonly reported assessment of model performance, it is prone to overestimation when output classes are unbalanced. In such cases, confusion matrices may additionally be reported, whereby diagnostic cut points for model-predicted probability may hold more clinical significance for clinical colleagues, as with related forms of diagnostic testing.
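
A minimal sketch of the two families of spatial assessment described above, together with the leakage-safe pattern of fitting dimensionality reduction on the training split only. The data are synthetic, and scikit-image (≥0.19) and scikit-learn are assumed rather than any specific radiomics package:

```python
import numpy as np
from scipy import ndimage
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
img = rng.integers(0, 64, (128, 128)).astype(np.uint8)  # stand-in grayscale ROI

# Spatial filter: a Laplacian highlights rapid grayscale change (edges/texture)
edge_energy = float(np.mean(ndimage.laplace(img.astype(float)) ** 2))

# Neighborhood-based method: gray-level co-occurrence at a set pixel distance
glcm = graycomatrix(img, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
contrast = float(graycoprops(glcm, "contrast")[0, 0])

# Leakage-safe dimensionality reduction: fit PCA on the training split only
X, y = rng.normal(size=(100, 182)), rng.integers(0, 2, 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pca = PCA(n_components=10).fit(X_tr)          # test data never seen here
X_tr_red, X_te_red = pca.transform(X_tr), pca.transform(X_te)
print(edge_energy, contrast, X_tr_red.shape, X_te_red.shape)
```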

{"title":"Spatial assessments in texture analysis: what the radiologist needs to know.","authors":"Bino A Varghese, Brandon K K Fields, Darryl H Hwang, Vinay A Duddalwar, George R Matcuk, Steven Y Cen","doi":"10.3389/fradi.2023.1240544","DOIUrl":"10.3389/fradi.2023.1240544","url":null,"abstract":"<p><p>To date, studies investigating radiomics-based predictive models have tended to err on the side of data-driven or exploratory analysis of many thousands of extracted features. In particular, spatial assessments of texture have proven to be especially adept at assessing for features of intratumoral heterogeneity in oncologic imaging, which likewise may correspond with tumor biology and behavior. These spatial assessments can be generally classified as spatial filters, which detect areas of rapid change within the grayscale in order to enhance edges and/or textures within an image, or neighborhood-based methods, which quantify gray-level differences of neighboring pixels/voxels within a set distance. Given the high dimensionality of radiomics datasets, data dimensionality reduction methods have been proposed in an attempt to optimize model performance in machine learning studies; however, it should be noted that these approaches should only be applied to training data in order to avoid information leakage and model overfitting. While area under the curve of the receiver operating characteristic is perhaps the most commonly reported assessment of model performance, it is prone to overestimation when output classifications are unbalanced. In such cases, confusion matrices may be additionally reported, whereby diagnostic cut points for model predicted probability may hold more clinical significance to clinical colleagues with respect to related forms of diagnostic testing.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1240544"},"PeriodicalIF":0.0,"publicationDate":"2023-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10484588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10225205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis.
Pub Date: 2023-08-08 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1241651
Joseph M Rich, Lokesh N Bhardwaj, Aman Shah, Krish Gangal, Mohitha S Rapaka, Assad A Oberai, Brandon K K Fields, George R Matcuk, Vinay A Duddalwar

Introduction: Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT).

Method: The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review.

Results: The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. Papers were relatively evenly distributed between primary and secondary malignancies, and between 3-dimensional and 2-dimensional data. Many papers utilized custom-built models that modify or extend U-Net. The most common evaluation metric was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9.
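
For reference, the DSC measures the overlap between a predicted mask and a ground-truth mask. A minimal NumPy implementation, independent of any surveyed model:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of any dimensionality."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Two offset cubes in a 3D volume, mimicking a predicted vs. reference lesion mask
a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), dtype=bool); b[10:22, 10:22, 10:22] = True
print(f"DSC = {dice(a, b):.3f}")
```

A DSC of 1 indicates perfect overlap, so the 0.85-0.9 medians above correspond to near-complete agreement with the reference masks.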

Discussion: Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.

{"title":"Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis.","authors":"Joseph M Rich, Lokesh N Bhardwaj, Aman Shah, Krish Gangal, Mohitha S Rapaka, Assad A Oberai, Brandon K K Fields, George R Matcuk, Vinay A Duddalwar","doi":"10.3389/fradi.2023.1241651","DOIUrl":"10.3389/fradi.2023.1241651","url":null,"abstract":"<p><strong>Introduction: </strong>Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT).</p><p><strong>Method: </strong>The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review.</p><p><strong>Results: </strong>The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was relatively even distribution of papers studying primary vs. secondary malignancies, as well as utilizing 3-dimensional vs. 2-dimensional data. Many papers utilize custom built models as a modification or variation of U-Net. The most common metric for evaluation was the dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85-0.9.</p><p><strong>Discussion: </strong>Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1241651"},"PeriodicalIF":0.0,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10442705/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10069334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation.
Pub Date: 2023-06-22 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1088068
Ricardo Bigolin Lanfredi, Joyce D Schroeder, Tolga Tasdizen

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Conveniently, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without impacting its image-level classification.
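
One plausible way to wire such supervision into training, sketched in PyTorch under stated assumptions rather than as the paper's actual loss: an image-level classification term plus a term pulling a per-label attention/CAM map toward an ET-derived heatmap. All names and shapes here are illustrative:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, cam, et_heatmap, loc_weight=0.5):
    """Image-level BCE plus a term pulling a class activation map (CAM)
    toward an eye-tracking heatmap; both maps are normalized to sum to 1."""
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
    cam_n = cam / (cam.sum(dim=(-2, -1), keepdim=True) + 1e-8)
    et_n = et_heatmap / (et_heatmap.sum(dim=(-2, -1), keepdim=True) + 1e-8)
    loc_loss = F.mse_loss(cam_n, et_n)
    return cls_loss + loc_weight * loc_loss

# Toy shapes: batch of 4 images, 5 abnormality labels, 16x16 maps per label
logits = torch.randn(4, 5)
labels = torch.randint(0, 2, (4, 5)).float()
cam = torch.rand(4, 5, 16, 16)
et_heatmap = torch.rand(4, 5, 16, 16)
print(combined_loss(logits, labels, cam, et_heatmap))
```

The weighting between terms, and exactly how the ET snippets are converted into per-label heatmaps, would follow the paper's design choices, which are not reproduced here.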

{"title":"Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation.","authors":"Ricardo Bigolin Lanfredi, Joyce D Schroeder, Tolga Tasdizen","doi":"10.3389/fradi.2023.1088068","DOIUrl":"10.3389/fradi.2023.1088068","url":null,"abstract":"<p><p>Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Opportunely, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without impacting its image-level classification.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1088068"},"PeriodicalIF":0.0,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365091/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9930026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions.
Pub Date: 2023-06-02 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1144004
Yitong Yang, Zahraw Shah, Athira J Jacob, Jackson Hair, Teodora Chitiboi, Tiziano Passerini, Jerome Yerly, Lorenzo Di Sopra, Davide Piccini, Zahra Hosseini, Puneet Sharma, Anurag Sahu, Matthias Stuber, John N Oshinski

Introduction: Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis, in particular for delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy, including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network trained primarily on computed tomography (CT) images in two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Given that previous studies showed Mres images to have greater image quality than Mcorr images, we hypothesized that LV volumes segmented from Mres images would be closer to the expert manually traced left ventricular endocardial border than those from Mcorr images.

Method: This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) between the automatically and manually segmented LV volumes was used as the primary quantity to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image and the sharpness of the LV myocardium-blood pool interface. A two-tailed paired Student's t-test (alpha = 0.05) was used to test significance.

Results & discussion: The AVD in the respiratory Mres reconstruction was lower than the AVD in the respiratory Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (n = 15, p-value = 0.03). The 3D Dice coefficient between the DL-segmented masks and the manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03 respectively, with a p-value = 0.02. Sharpness on Mres images was higher than on Mcorr images: 0.15 ± 0.05 vs. 0.12 ± 0.04, respectively, with a p-value of 0.014 (n = 15).

Conclusion: We conclude that the DL-based 3D automatic LV segmentation network trained on CT images and fine-tuned on MR images generalized better on Mres images than on Mcorr images for quantifying LV volumes.

{"title":"Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions.","authors":"Yitong Yang,&nbsp;Zahraw Shah,&nbsp;Athira J Jacob,&nbsp;Jackson Hair,&nbsp;Teodora Chitiboi,&nbsp;Tiziano Passerini,&nbsp;Jerome Yerly,&nbsp;Lorenzo Di Sopra,&nbsp;Davide Piccini,&nbsp;Zahra Hosseini,&nbsp;Puneet Sharma,&nbsp;Anurag Sahu,&nbsp;Matthias Stuber,&nbsp;John N Oshinski","doi":"10.3389/fradi.2023.1144004","DOIUrl":"10.3389/fradi.2023.1144004","url":null,"abstract":"<p><strong>Introduction: </strong>Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis and in particular, delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network trained primarily on computed tomography (CT) images in two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Given that Mres images were shown to have greater image quality in previous studies than Mcorr images, we <i>hypothesized</i> that the LV volumes segmented from Mres images are closer to the manual expert-traced left ventricular endocardial border than the Mcorr images.</p><p><strong>Method: </strong>This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) of the automatically and manually segmented LV volumes was used as the primary quantity to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image and the sharpness of the LV myocardium-blood pool interface. A two-tail paired Student's <i>t</i>-test (alpha = 0.05) was used to test the significance in this study.</p><p><strong>Results & discussion: </strong>The AVD in the respiratory Mres reconstruction was lower than the AVD in the respiratory Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (<i>n</i> = 15, <i>p</i>-value = 0.03). The 3D Dice coefficient between the DL-segmented masks and the manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03 respectively, with a <i>p</i>-value = 0.02. Sharpness on Mres images was higher than on Mcorr images: 0.15 ± 0.05 vs. 
0.12 ± 0.04, respectively, with a <i>p</i>-value of 0.014 (<i>n</i> = 15).</p><p><strong>Conclusion: </strong>We conclude that the DL-based 3D automatic LV segmentation network trained on CT images and fine-tuned on MR images generalized better on Mres images than on Mcorr images for quantifying LV volumes.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1144004"},"PeriodicalIF":0.0,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365088/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10234001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital transformation of career landscapes in radiology: personal and professional implications.
Pub Date: 2023-05-22 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1180699
Anjali Agrawal

Millennial radiology is marked by technical disruptions. Advances in internet, digital communications, and computing technology paved the way for digitalized workflow orchestration of busy radiology departments. The COVID pandemic brought teleradiology to the forefront, highlighting its importance in maintaining continuity of radiological services and making it an integral component of radiology practice. Increasing computing power and integrated multimodal data are driving the incorporation of artificial intelligence at various stages of the radiology imaging and reporting cycle. These changes have transformed, and will continue to transform, the career landscape in radiology, with more options for radiologists with varied interests and career goals. The ability to work from anywhere at any time needs to be balanced with other aspects of life. Robust communication, internal and external collaboration, self-discipline, and self-motivation are key to achieving the desired balance while practicing radiology the unconventional way.

{"title":"Digital transformation of career landscapes in radiology: personal and professional implications.","authors":"Anjali Agrawal","doi":"10.3389/fradi.2023.1180699","DOIUrl":"10.3389/fradi.2023.1180699","url":null,"abstract":"<p><p>Millennial radiology is marked by technical disruptions. Advances in internet, digital communications and computing technology, paved way for digitalized workflow orchestration of busy radiology departments. The COVID pandemic brought teleradiology to the forefront, highlighting its importance in maintaining continuity of radiological services, making it an integral component of the radiology practice. Increasing computing power and integrated multimodal data are driving incorporation of artificial intelligence at various stages of the radiology image and reporting cycle. These have and will continue to transform the career landscape in radiology, with more options for radiologists with varied interests and career goals. The ability to work from anywhere and anytime needs to be balanced with other aspects of life. Robust communication, internal and external collaboration, self-discipline, and self-motivation are key to achieving the desired balance while practicing radiology the unconventional way.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1180699"},"PeriodicalIF":0.0,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10364979/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10233998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence in neuroradiology: a scoping review of some ethical challenges.
Pub Date: 2023-05-15 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1149461
Pegah Khosravi, Mark Schweitzer

Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to determine treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, and the questions of responsibility and liability that might arise. In this manuscript, we first provide a brief overview of AI methods used in neuroradiology and then segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain aligned with ethics in research and healthcare in the future.

{"title":"Artificial intelligence in neuroradiology: a scoping review of some ethical challenges.","authors":"Pegah Khosravi, Mark Schweitzer","doi":"10.3389/fradi.2023.1149461","DOIUrl":"10.3389/fradi.2023.1149461","url":null,"abstract":"<p><p>Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to determine treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, as well as responsibility and liability that might potentially arise. In this manuscript, we will first provide a brief overview of AI methods used in neuroradiology and segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1149461"},"PeriodicalIF":0.0,"publicationDate":"2023-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365008/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10234003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mouse brain MR super-resolution using a deep learning network trained with optical imaging data.
Pub Date: 2023-05-15 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1155866
Zifei Liang, Jiangyang Zhang

Introduction: The resolution of magnetic resonance imaging is often limited to the millimeter level due to its inherent signal-to-noise disadvantage compared to other imaging modalities. Super-resolution (SR) of MRI data aims to enhance its resolution and diagnostic value. While deep learning-based SR has shown potential, its applications in MRI remain limited, especially for preclinical MRI, where large high-resolution MRI datasets for training are often lacking.

Methods: In this study, we first used high-resolution mouse brain auto-fluorescence (AF) data acquired using serial two-photon tomography (STPT) to examine the performance of deep learning-based SR for mouse brain images.

Results: We found that the best SR performance was obtained when the resolutions of training and target data were matched. We then applied the network trained using AF data to MRI data of the mouse brain, and found that the performance of the SR network depended on the tissue contrast presented in the MRI data. Using transfer learning and a limited set of high-resolution mouse brain MRI data, we were able to fine-tune the initial network trained using AF to enhance the resolution of MRI data.
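
A hedged sketch of this transfer-learning step: start from a network pretrained on AF data, freeze the early feature layers, and fine-tune the remaining layers on the limited MRI set. The architecture and checkpoint name are placeholders, not the authors' code:

```python
import torch
import torch.nn as nn

class SRNet(nn.Module):
    """Stand-in super-resolution CNN; the paper's architecture is not shown here."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        return self.head(self.features(x))

model = SRNet()
# model.load_state_dict(torch.load("af_pretrained.pt"))  # hypothetical AF checkpoint
for p in model.features.parameters():
    p.requires_grad = False                     # keep AF-learned filters fixed
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
# ...then fine-tune the head on the limited high-resolution mouse brain MRI set
```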

Discussion: Our results suggest that deep learning SR networks trained using high-resolution data of a different modality can be applied to MRI data after transfer learning.

{"title":"Mouse brain MR super-resolution using a deep learning network trained with optical imaging data.","authors":"Zifei Liang, Jiangyang Zhang","doi":"10.3389/fradi.2023.1155866","DOIUrl":"10.3389/fradi.2023.1155866","url":null,"abstract":"<p><strong>Introduction: </strong>The resolution of magnetic resonance imaging is often limited at the millimeter level due to its inherent signal-to-noise disadvantage compared to other imaging modalities. Super-resolution (SR) of MRI data aims to enhance its resolution and diagnostic value. While deep learning-based SR has shown potential, its applications in MRI remain limited, especially for preclinical MRI, where large high-resolution MRI datasets for training are often lacking.</p><p><strong>Methods: </strong>In this study, we first used high-resolution mouse brain auto-fluorescence (AF) data acquired using serial two-photon tomography (STPT) to examine the performance of deep learning-based SR for mouse brain images.</p><p><strong>Results: </strong>We found that the best SR performance was obtained when the resolutions of training and target data were matched. We then applied the network trained using AF data to MRI data of the mouse brain, and found that the performance of the SR network depended on the tissue contrast presented in the MRI data. Using transfer learning and a limited set of high-resolution mouse brain MRI data, we were able to fine-tune the initial network trained using AF to enhance the resolution of MRI data.</p><p><strong>Discussion: </strong>Our results suggest that deep learning SR networks trained using high-resolution data of a different modality can be applied to MRI data after transfer learning.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1155866"},"PeriodicalIF":0.0,"publicationDate":"2023-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365285/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10252062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
uRP: An integrated research platform for one-stop analysis of medical images.
Pub Date: 2023-04-18 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1153784
Jiaojiao Wu, Yuwei Xia, Xuechun Wang, Ying Wei, Aie Liu, Arun Innanje, Meng Zheng, Lei Chen, Jing Shi, Liye Wang, Yiqiang Zhan, Xiang Sean Zhou, Zhong Xue, Feng Shi, Dinggang Shen

Introduction: Medical image analysis is of tremendous importance for clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible.

Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture that is multifunctional, extensible, and customizable.

Results and discussion: The uRP offers three advantages: 1) it spans a wealth of image processing algorithms, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline; 2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses; and 3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.
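
As an illustration of what a modularized, extensible architecture can look like in code, here is a toy registry of processing modules composed into per-study pipelines. This is an assumption-laden sketch, not uRP's actual API:

```python
from typing import Any, Callable, Dict, List

MODULES: Dict[str, Callable[[Any], Any]] = {}

def register(name: str) -> Callable:
    """Add a processing step to the registry so pipelines can be composed by name."""
    def decorator(fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
        MODULES[name] = fn
        return fn
    return decorator

@register("denoise")
def denoise(image: Any) -> Any:
    return image  # placeholder: a real module would filter the image

@register("segment")
def segment(image: Any) -> Any:
    return image  # placeholder: a real module would return masks

def run_pipeline(image: Any, steps: List[str]) -> Any:
    """Apply registered modules in order, e.g. a custom brain or knee pipeline."""
    for step in steps:
        image = MODULES[step](image)
    return image

result = run_pipeline("study-001", ["denoise", "segment"])
print(result)
```

A registry of this kind is one common way to let modules be "directly applied, combined, or customized" per application domain, as the abstract describes.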

{"title":"uRP: An integrated research platform for one-stop analysis of medical images.","authors":"Jiaojiao Wu, Yuwei Xia, Xuechun Wang, Ying Wei, Aie Liu, Arun Innanje, Meng Zheng, Lei Chen, Jing Shi, Liye Wang, Yiqiang Zhan, Xiang Sean Zhou, Zhong Xue, Feng Shi, Dinggang Shen","doi":"10.3389/fradi.2023.1153784","DOIUrl":"10.3389/fradi.2023.1153784","url":null,"abstract":"<p><strong>Introduction: </strong>Medical image analysis is of tremendous importance in serving clinical diagnosis, treatment planning, as well as prognosis assessment. However, the image analysis process usually involves multiple modality-specific software and relies on rigorous manual operations, which is time-consuming and potentially low reproducible.</p><p><strong>Methods: </strong>We present an integrated platform - uAI Research Portal (uRP), to achieve one-stop analyses of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable.</p><p><strong>Results and discussion: </strong>The uRP shows 3 advantages, as it 1) spans a wealth of algorithms for image processing including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline, 2) integrates a variety of functional modules, which can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses, 3) enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage for multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1153784"},"PeriodicalIF":0.0,"publicationDate":"2023-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365282/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10233996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A radiomics approach to the diagnosis of femoroacetabular impingement.
Pub Date: 2023-03-20 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1151258
Eros Montin, Richard Kijowski, Thomas Youm, Riccardo Lattanzi

Introduction: Femoroacetabular Impingement (FAI) is a hip pathology characterized by impingement of the femoral head-neck junction against the acetabular rim, due to abnormalities in bone morphology. FAI is normally diagnosed by manual evaluation of morphologic features on magnetic resonance imaging (MRI). In this study, we assess, for the first time, the feasibility of using radiomics to detect FAI by automatically extracting quantitative features from images.

Material and methods: 17 patients diagnosed with monolateral FAI underwent pre-surgical MR imaging, including a 3D Dixon sequence of the pelvis. An expert radiologist drew regions of interest on the water-only Dixon images, outlining the femur and acetabulum in both the impingement (IJ) and healthy joints (HJ). 182 radiomic features were extracted for each hip. The dataset size was increased 60-fold with an ad hoc data augmentation tool. Features were subdivided by type and region into 24 subsets. For each, a univariate ANOVA F-value analysis was applied to find the 5 features most correlated with IJ based on p-value, for a total of 48 subsets. For each subset, a K-nearest neighbor model was trained to differentiate between IJ and HJ using the values of the radiomic features in the subset as input. The training was repeated 100 times, randomly subdividing the data into 75%/25% training/testing splits.
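
A minimal sketch of this analysis pipeline with scikit-learn, using synthetic placeholder data: a univariate ANOVA F-test selects the 5 most discriminative features on the training split only, and a K-nearest neighbor classifier is evaluated over 100 random 75%/25% splits. The dataset shape mirrors the numbers described (17 patients × 2 joints × 60 augmentations):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2040, 182))       # stand-in augmented dataset (182 features)
y = rng.integers(0, 2, size=2040)      # 0 = healthy joint, 1 = impingement

accuracies = []
for seed in range(100):                # 100 random 75%/25% splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)  # ANOVA F-test, train only
    knn = KNeighborsClassifier().fit(selector.transform(X_tr), y_tr)
    accuracies.append(knn.score(selector.transform(X_te), y_te))

print(f"max accuracy over splits: {max(accuracies):.3f}")
```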

Results: The texture-based gray-level features yielded the highest maximum prediction accuracy (0.972) with the smallest subset of features. This suggests that gray-level values are more homogeneously distributed in the HJ than in the IJ, which could be due to stress-related inflammation resulting from impingement.

Conclusions: We showed that radiomics can automatically distinguish IJ from HJ using water-only Dixon MRI. To our knowledge, this is the first application of radiomics to FAI diagnosis. We reported an accuracy greater than 97%, which is higher than the 90% accuracy reported for standard diagnostic tests in detecting FAI. Our proposed radiomic analysis could be combined with methods for automated joint segmentation to rapidly identify patients with FAI, avoiding time-consuming radiological measurements of bone morphology.

{"title":"A radiomics approach to the diagnosis of femoroacetabular impingement.","authors":"Eros Montin, Richard Kijowski, Thomas Youm, Riccardo Lattanzi","doi":"10.3389/fradi.2023.1151258","DOIUrl":"10.3389/fradi.2023.1151258","url":null,"abstract":"<p><strong>Introduction: </strong>Femoroacetabular Impingement (FAI) is a hip pathology characterized by impingement of the femoral head-neck junction against the acetabular rim, due to abnormalities in bone morphology. FAI is normally diagnosed by manual evaluation of morphologic features on magnetic resonance imaging (MRI). In this study, we assess, for the first time, the feasibility of using radiomics to detect FAI by automatically extracting quantitative features from images.</p><p><strong>Material and methods: </strong>17 patients diagnosed with monolateral FAI underwent pre-surgical MR imaging, including a 3D Dixon sequence of the pelvis. An expert radiologist drew regions of interest on the water-only Dixon images outlining femur and acetabulum in both impingement (IJ) and healthy joints (HJ). 182 radiomic features were extracted for each hip. The dataset numerosity was increased by 60 times with an ad-hoc data augmentation tool. Features were subdivided by type and region in 24 subsets. For each, a univariate ANOVA <i>F</i>-value analysis was applied to find the 5 features most correlated with IJ based on <i>p</i>-value, for a total of 48 subsets. For each subset, a K-nearest neighbor model was trained to differentiate between IJ and HJ using the values of the radiomic features in the subset as input. The training was repeated 100 times, randomly subdividing the data with 75%/25% training/testing.</p><p><strong>Results: </strong>The texture-based gray level features yielded the highest prediction max accuracy (0.972) with the smallest subset of features. This suggests that the gray image values are more homogeneously distributed in the HJ in comparison to IJ, which could be due to stress-related inflammation resulting from impingement.</p><p><strong>Conclusions: </strong>We showed that radiomics can automatically distinguish IJ from HJ using water-only Dixon MRI. To our knowledge, this is the first application of radiomics for FAI diagnosis. We reported an accuracy greater than 97%, which is higher than the 90% accuracy for detecting FAI reported for standard diagnostic tests (90%). Our proposed radiomic analysis could be combined with methods for automated joint segmentation to rapidly identify patients with FAI, avoiding time-consuming radiological measurements of bone morphology.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1151258"},"PeriodicalIF":0.0,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365279/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10233997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1