
Latest publications in Frontiers in radiology

AI in the Loop: functionalizing fold performance disagreement to monitor automated medical image segmentation workflows.
Pub Date : 2023-09-15 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1223294
Harrison C Gottlich, Panagiotis Korfiatis, Adriana V Gregory, Timothy L Kline

Introduction: Methods that automatically flag poor performing predictions are drastically needed to safely implement machine learning workflows into clinical practice as well as to identify difficult cases during model training.

Methods: Disagreement between the fivefold cross-validation sub-models was quantified using dice scores between folds and summarized as a surrogate for model confidence. The summarized Interfold Dices were compared with thresholds informed by human interobserver values to determine whether final ensemble model performance should be manually reviewed.
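As a rough sketch of this disagreement signal (an illustration, not the authors' code: masks are represented as sets of foreground voxel indices, and the 0.85 threshold is a placeholder standing in for the interobserver-informed value):

```python
from itertools import combinations
from statistics import median

def dice(a, b):
    """Dice coefficient between two binary masks given as sets of voxel indices."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def interfold_dice(fold_masks):
    """Summarize sub-model disagreement: median pairwise Dice across fold predictions."""
    return median(dice(a, b) for a, b in combinations(fold_masks, 2))

def flag_for_review(fold_masks, threshold=0.85):
    """Flag a case for manual review when interfold agreement falls below threshold."""
    return interfold_dice(fold_masks) < threshold
```

Cases whose fold predictions disagree strongly (low interfold Dice) are routed to a human reader instead of being accepted automatically.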

Results: On all tasks, the method efficiently flagged poorly segmented images without consulting a reference standard. Using the median Interfold Dice for comparison, substantial Dice score improvements after excluding flagged images were noted for the in-domain CT (0.85 ± 0.20 to 0.91 ± 0.08, 8/50 images flagged) and MR (0.76 ± 0.27 to 0.85 ± 0.09, 8/50 images flagged) tasks. Most impressively, there were dramatic Dice score improvements in the simulated out-of-distribution task, where a model trained on a radical nephrectomy dataset spanning different contrast phases predicted on an all cortico-medullary phase partial nephrectomy dataset (0.67 ± 0.36 to 0.89 ± 0.10, 122/300 images flagged).

Discussion: Comparing interfold sub-model disagreement against human interobserver values is an effective and efficient way to assess automated predictions when a reference standard is not available. This functionality provides a safeguard for patient care that is important for safely implementing automated medical image segmentation workflows.

Citations: 0
High angular diffusion tensor imaging estimation from minimal evenly distributed diffusion gradient directions.
Pub Date : 2023-09-11 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1238566
Zihao Tang, Sheng Chen, Arkiev D'Souza, Dongnan Liu, Fernando Calamante, Michael Barnett, Weidong Cai, Chenyu Wang, Mariano Cabezas

Diffusion-weighted Imaging (DWI) is a non-invasive imaging technique based on Magnetic Resonance Imaging (MRI) principles to measure water diffusivity and reveal details of the underlying brain micro-structure. By fitting a tensor model to quantify the directionality of water diffusion, a Diffusion Tensor Image (DTI) can be derived, and scalar measures, such as fractional anisotropy (FA), can then be estimated from the DTI to summarise quantitative microstructural information for clinical studies. In particular, FA has been shown to be a useful research metric to identify tissue abnormalities in neurological disease (e.g. decreased anisotropy as a proxy for tissue damage). However, time constraints in clinical practice lead to low angular resolution diffusion imaging (LARDI) acquisitions that can cause inaccurate FA value estimates when compared to those generated from high angular resolution diffusion imaging (HARDI) acquisitions. In this work, we propose the High Angular DTI Estimation Network (HADTI-Net) to estimate an enhanced DTI model from LARDI with a set of minimal and evenly distributed diffusion gradient directions. Extensive experiments have been conducted to show the reliability and generalisation of HADTI-Net in generating high angular DTI estimation from any minimal evenly distributed diffusion gradient directions and to explore the feasibility of applying a data-driven method to this task. The code repository of this work and other related works can be found at https://mri-synthesis.github.io/.
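For concreteness, the FA measure mentioned above is computed from the three eigenvalues of the fitted diffusion tensor; a minimal sketch (standard textbook formula, assuming the eigenvalues have already been estimated by a tensor fit):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the eigenvalues of a fitted diffusion tensor.

    FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2) / sqrt(l1^2 + l2^2 + l3^2)
    Ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis).
    """
    denom = math.sqrt(l1 * l1 + l2 * l2 + l3 * l3)
    if denom == 0.0:
        return 0.0  # no diffusion signal at this voxel
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    return math.sqrt(0.5) * num / denom
```

Lower-than-expected FA in white matter is the kind of "decreased anisotropy" proxy for tissue damage referred to in the abstract.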

Citations: 0
Evaluating automated longitudinal tumor measurements for glioblastoma response assessment.
Pub Date : 2023-09-07 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1211859
Yannick Suter, Michelle Notter, Raphael Meier, Tina Loosli, Philippe Schucht, Roland Wiest, Mauricio Reyes, Urspeter Knecht

Automated tumor segmentation tools for glioblastoma show promising performance. To apply these tools for automated response assessment, longitudinal segmentation, and tumor measurement, consistency is critical. This study aimed to determine whether BraTumIA and HD-GLIO are suited for this task. We evaluated two segmentation tools with respect to automated response assessment on the single-center retrospective LUMIERE dataset with 80 patients and a total of 502 post-operative time points. Volumetry and automated bi-dimensional measurements were compared with expert measurements following the Response Assessment in Neuro-Oncology (RANO) guidelines. The longitudinal trend agreement between the expert and methods was evaluated, and the RANO progression thresholds were tested against the expert-derived time-to-progression (TTP). The TTP and overall survival (OS) correlation was used to check the progression thresholds. We evaluated the automated detection and influence of non-measurable lesions. The tumor volume trend agreement calculated between segmentation volumes and the expert bi-dimensional measurements was high (HD-GLIO: 81.1%, BraTumIA: 79.7%). BraTumIA achieved the closest match to the expert TTP using the recommended RANO progression threshold. HD-GLIO-derived tumor volumes reached the highest correlation between TTP and OS (0.55). Both tools failed at an accurate lesion count across time. Manual false-positive removal and restricting to a maximum number of measurable lesions had no beneficial effect. Expert supervision and manual corrections are still necessary when applying the tested automated segmentation tools for automated response assessment. The longitudinal consistency of current segmentation tools needs further improvement. Validation of volumetric and bi-dimensional progression thresholds with multi-center studies is required to move toward volumetry-based response assessment.
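The RANO-style progression check referenced above can be illustrated with a simplified sketch (a single lesion's product of perpendicular diameters, the standard ≥25% increase over nadir, and none of RANO's measurability, new-lesion, or clinical criteria):

```python
def rano_progression(products, threshold=0.25):
    """Return the index of the first time point whose product of perpendicular
    diameters exceeds the running nadir by the threshold (default 25%), or None.

    `products` is a chronological list of bi-dimensional measurements
    (longest diameter x longest perpendicular diameter) for one lesion.
    """
    nadir = products[0]
    for i, p in enumerate(products[1:], start=1):
        if p >= nadir * (1 + threshold):
            return i          # progression called at this time point
        nadir = min(nadir, p)  # nadir is the smallest measurement so far
    return None
```

Comparing this index against the expert-derived time-to-progression is essentially the consistency test the study performs at scale.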

Citations: 0
From coarse to fine: a deep 3D probability volume contours framework for tumour segmentation and dose painting in PET images.
Pub Date : 2023-09-05 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1225215
Wenhui Zhang, Surajit Ray

With the increasing integration of functional imaging techniques like Positron Emission Tomography (PET) into radiotherapy (RT) practices, a paradigm shift in cancer treatment methodologies is underway. A fundamental step in RT planning is the accurate segmentation of tumours based on clinical diagnosis. Furthermore, novel tumour control methods, such as intensity modulated radiation therapy (IMRT) dose painting, demand the precise delineation of multiple intensity value contours to ensure optimal tumour dose distribution. Recently, convolutional neural networks (CNNs) have made significant strides in 3D image segmentation tasks, most of which present the output map at a voxel-wise level. However, because of information loss in subsequent downsampling layers, they frequently fail to precisely identify object boundaries. Moreover, in the context of dose painting strategies, there is an imperative need for reliable and precise image segmentation techniques to delineate high recurrence-risk contours. To address these challenges, we introduce a 3D coarse-to-fine framework, integrating a CNN with a kernel smoothing-based probability volume contour approach (KsPC). This integrated approach generates contour-based segmentation volumes, mimicking expert-level precision and providing accurate probability contours crucial for optimizing dose painting/IMRT strategies. Our final model, named KsPC-Net, leverages a CNN backbone to automatically learn parameters in the kernel smoothing process, thereby obviating the need for user-supplied tuning parameters. The 3D KsPC-Net exploits the strength of KsPC to simultaneously identify object boundaries and generate corresponding probability volume contours, which can be trained within an end-to-end framework. The proposed model has demonstrated promising performance, surpassing state-of-the-art models when tested against the MICCAI 2021 challenge dataset (HECKTOR).
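The contour idea can be illustrated in one dimension: smooth per-voxel CNN probabilities with a kernel, then threshold at several probability levels to obtain the nested regions used for dose painting (a toy sketch, not the KsPC-Net implementation; the kernel weights and levels are placeholders):

```python
def smooth(probs, kernel):
    """Kernel-smooth a 1D profile of CNN voxel probabilities (reflect padding)."""
    k = len(kernel) // 2
    total = sum(kernel)
    # Reflect the profile at both ends so every output has a full neighbourhood.
    padded = probs[k:0:-1] + probs + probs[-2:-2 - k:-1]
    return [sum(w * padded[i + j] for j, w in enumerate(kernel)) / total
            for i in range(len(probs))]

def nested_regions(probs, levels):
    """Voxel index sets above each probability level -> nested contour regions."""
    return {lv: {i for i, p in enumerate(probs) if p >= lv} for lv in levels}
```

Higher probability levels carve out smaller regions inside lower ones, giving the family of contours that an IMRT dose-painting plan can target.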

Citations: 0
Retrospective quantification of clinical abdominal DCE-MRI using pharmacokinetics-informed deep learning: a proof-of-concept study.
Pub Date : 2023-09-04 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1168901
Chaowei Wu, Nan Wang, Srinivas Gaddam, Lixia Wang, Hui Han, Kyunghyun Sung, Anthony G Christodoulou, Yibin Xie, Stephen Pandol, Debiao Li

Introduction: Dynamic contrast-enhanced (DCE) MRI has important clinical value for early detection, accurate staging, and therapeutic monitoring of cancers. However, conventional multi-phasic abdominal DCE-MRI has limited temporal resolution and provides qualitative or semi-quantitative assessments of tissue vascularity. In this study, the feasibility of retrospectively quantifying multi-phasic abdominal DCE-MRI by using pharmacokinetics-informed deep learning to improve temporal resolution was investigated.

Method: Forty-five subjects, consisting of healthy controls and patients with pancreatic ductal adenocarcinoma (PDAC) or chronic pancreatitis (CP), were imaged with a 2-s temporal-resolution quantitative DCE sequence, from which 30-s temporal-resolution multi-phasic DCE-MRI was synthesized based on the clinical protocol. A pharmacokinetics-informed neural network was trained to improve the temporal resolution of the multi-phasic DCE before the quantification of pharmacokinetic parameters. Through ten-fold cross-validation, the agreement between pharmacokinetic parameters estimated from the synthesized multi-phasic DCE after deep learning inference and reference parameters from the corresponding quantitative DCE-MRI images was assessed. The ability of the deep-learning-estimated parameters to differentiate abnormal from normal tissues was assessed as well.

Results: The pharmacokinetic parameters estimated after deep learning have a high level of agreement with the reference values. In the cross-validation, all three pharmacokinetic parameters (transfer constant Ktrans, fractional extravascular extracellular volume ve, and rate constant kep) achieved intraclass correlation coefficients and R2 values between 0.84 and 0.94, with low coefficients of variation (10.1%, 12.3%, and 5.6%, respectively) relative to the reference values. Significant differences were found between the healthy pancreas, PDAC tumor and non-tumor tissue, and CP pancreas.

Discussion: Retrospective quantification (RoQ) of clinical multi-phasic DCE-MRI is possible by deep learning. This technique has the potential to derive quantitative pharmacokinetic parameters from clinical multi-phasic DCE data for a more objective and precise assessment of cancer.
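The three parameters reported above are related through the standard Tofts model, in which kep = Ktrans/ve and tissue concentration is a convolution of the arterial input function with an exponential; a minimal forward-model sketch (illustrative rectangle-rule integration on a uniformly sampled input cp, not the authors' fitting pipeline):

```python
import math

def kep(ktrans, ve):
    """Rate constant from the transfer constant and the fractional
    extravascular extracellular volume: kep = Ktrans / ve."""
    return ktrans / ve

def tofts_concentration(cp, ktrans, ve, dt):
    """Standard Tofts model: Ct(t) = Ktrans * integral of Cp(tau) * exp(-kep*(t-tau)) dtau,
    evaluated with a simple rectangle rule on uniformly sampled cp (spacing dt)."""
    k = kep(ktrans, ve)
    ct = []
    for i in range(len(cp)):
        acc = sum(cp[j] * math.exp(-k * (i - j) * dt) for j in range(i + 1))
        ct.append(ktrans * acc * dt)
    return ct
```

Fitting Ktrans and ve amounts to inverting this forward model against the measured tissue curve, which is why adequate temporal resolution of the DCE series matters.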

Citations: 0
Spatial assessments in texture analysis: what the radiologist needs to know.
Pub Date : 2023-08-24 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1240544
Bino A Varghese, Brandon K K Fields, Darryl H Hwang, Vinay A Duddalwar, George R Matcuk, Steven Y Cen

To date, studies investigating radiomics-based predictive models have tended to err on the side of data-driven or exploratory analysis of many thousands of extracted features. In particular, spatial assessments of texture have proven to be especially adept at assessing for features of intratumoral heterogeneity in oncologic imaging, which likewise may correspond with tumor biology and behavior. These spatial assessments can be generally classified as spatial filters, which detect areas of rapid change within the grayscale in order to enhance edges and/or textures within an image, or neighborhood-based methods, which quantify gray-level differences of neighboring pixels/voxels within a set distance. Given the high dimensionality of radiomics datasets, data dimensionality reduction methods have been proposed in an attempt to optimize model performance in machine learning studies; however, it should be noted that these approaches should only be applied to training data in order to avoid information leakage and model overfitting. While area under the curve of the receiver operating characteristic is perhaps the most commonly reported assessment of model performance, it is prone to overestimation when output classifications are unbalanced. In such cases, confusion matrices may be additionally reported, whereby diagnostic cut points for model predicted probability may hold more clinical significance to clinical colleagues with respect to related forms of diagnostic testing.
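A neighborhood-based texture measure of the kind described here can be sketched as a gray-level co-occurrence matrix (GLCM) and its contrast statistic (a minimal illustration for small 2D integer-valued images; the single offset and single statistic are placeholders for a full radiomics feature set):

```python
def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix: count pairs of gray levels separated
    by `offset` within the image (a neighborhood-based texture measure)."""
    dr, dc = offset
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                m[image[r][c]][image[rr][cc]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum of (i-j)^2 * p(i,j); larger for rapidly varying texture."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(m) for j, v in enumerate(row))
```

A homogeneous region yields zero contrast while a checkerboard yields the maximum, which is why such statistics serve as surrogates for intratumoral heterogeneity.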

引用次数: 0
Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis.
Pub Date : 2023-08-08 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1241651
Joseph M Rich, Lokesh N Bhardwaj, Aman Shah, Krish Gangal, Mohitha S Rapaka, Assad A Oberai, Brandon K K Fields, George R Matcuk, Vinay A Duddalwar

Introduction: Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT).

Method: The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review.

Results: The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as utilizing 3-dimensional vs. 2-dimensional data. Many papers utilized custom-built models as a modification or variation of U-Net. The most common metric for evaluation was the dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9.

Discussion: Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
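Since the dice similarity coefficient (DSC) is the headline metric in this review, a minimal sketch may help: DSC = 2|A∩B| / (|A| + |B|) for binary masks A and B, where 1.0 is perfect overlap and 0.0 is no overlap. The NumPy implementation below is a generic illustration, not code from any reviewed study; scoring two empty masks as 1.0 is one common convention, not universal.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred & truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a convention)
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

By this definition, a model whose predicted lesion mask covers half of a ground-truth mask of equal size scores 2/3, which puts the reported medians of 0.85-0.9 in context.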

Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation.
Pub Date : 2023-06-22 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1088068
Ricardo Bigolin Lanfredi, Joyce D Schroeder, Tolga Tasdizen

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Conveniently, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without impacting its image-level classification.
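The snippet-extraction step can be caricatured as follows: keep the gaze samples that fall inside the dictation window of a keyword and turn them into a coarse localization map. The data layout below (timestamped row/column gaze samples) is hypothetical and chosen for illustration; it is not the authors' pipeline.

```python
import numpy as np

def keyword_gaze_mask(gaze_samples, window, image_shape):
    """Build a binary localization map from eye-tracking samples.

    gaze_samples: iterable of (t, row, col) fixation samples (hypothetical format)
    window: (t0, t1) time span during which a keyword was dictated
    image_shape: (rows, cols) of the chest x-ray
    """
    t0, t1 = window
    mask = np.zeros(image_shape, dtype=float)
    for t, r, c in gaze_samples:
        # keep only samples inside the keyword's dictation window and the image
        if t0 <= t <= t1 and 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
            mask[r, c] = 1.0
    return mask
```

In practice such a map would typically be smoothed (e.g., with a Gaussian) before being used as a localization target for the CNN.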

Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions.
Pub Date : 2023-06-02 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1144004
Yitong Yang, Zahraw Shah, Athira J Jacob, Jackson Hair, Teodora Chitiboi, Tiziano Passerini, Jerome Yerly, Lorenzo Di Sopra, Davide Piccini, Zahra Hosseini, Puneet Sharma, Anurag Sahu, Matthias Stuber, John N Oshinski

Introduction: Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis and, in particular, delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network trained primarily on computed tomography (CT) images in two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Given that previous studies showed Mres images to have greater image quality than Mcorr images, we hypothesized that the LV volumes segmented from Mres images are closer to the manual expert-traced left ventricular endocardial border than those segmented from Mcorr images.

Method: This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady-state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) between the automatically and manually segmented LV volumes was used as the primary quantity to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image and the sharpness of the LV myocardium-blood pool interface. A two-tailed paired Student's t-test (alpha = 0.05) was used to test significance in this study.

Results & discussion: The AVD in the respiratory Mres reconstruction was lower than the AVD in the respiratory Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (n = 15, p-value = 0.03). The 3D Dice coefficient between the DL-segmented masks and the manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03 respectively, with a p-value = 0.02. Sharpness on Mres images was higher than on Mcorr images: 0.15 ± 0.05 vs. 0.12 ± 0.04, respectively, with a p-value of 0.014 (n = 15).

Conclusion: We conclude that the DL-based 3D automatic LV segmentation network trained on CT images and fine-tuned on MR images generalized better on Mres images than on Mcorr images for quantifying LV volumes.
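The two quantities behind these comparisons, per-case absolute volume difference and a paired t statistic, are simple to compute. The sketch below is a generic NumPy illustration under assumed inputs (volume lists in ml); the study's actual analysis may differ in detail.

```python
import math
import numpy as np

def absolute_volume_difference(auto_ml, manual_ml):
    """Per-case AVD: |automatic - manual| LV volume, both in ml."""
    return np.abs(np.asarray(auto_ml, dtype=float) - np.asarray(manual_ml, dtype=float))

def paired_t_statistic(x, y):
    """Paired t statistic on per-case differences. For a two-tailed
    p-value, compare |t| with a t distribution on n - 1 degrees of
    freedom (e.g., via scipy.stats)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / math.sqrt(n)))
```

Reporting the per-case AVDs alongside their mean, as the abstract does (e.g., 7.73 ± 6.54 ml), conveys both bias and spread of the volume error.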

Digital transformation of career landscapes in radiology: personal and professional implications.
Pub Date : 2023-05-22 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1180699
Anjali Agrawal

Millennial radiology is marked by technical disruptions. Advances in internet, digital communications, and computing technology paved the way for digitalized workflow orchestration of busy radiology departments. The COVID pandemic brought teleradiology to the forefront, highlighting its importance in maintaining continuity of radiological services and making it an integral component of radiology practice. Increasing computing power and integrated multimodal data are driving the incorporation of artificial intelligence at various stages of the radiology image and reporting cycle. These changes have transformed, and will continue to transform, the career landscape in radiology, offering more options for radiologists with varied interests and career goals. The ability to work from anywhere at any time needs to be balanced with other aspects of life. Robust communication, internal and external collaboration, self-discipline, and self-motivation are key to achieving the desired balance while practicing radiology the unconventional way.
