
Frontiers in Radiology: latest publications

From coarse to fine: a deep 3D probability volume contours framework for tumour segmentation and dose painting in PET images.
Pub Date : 2023-09-05 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1225215
Wenhui Zhang, Surajit Ray

With the increasing integration of functional imaging techniques like Positron Emission Tomography (PET) into radiotherapy (RT) practices, a paradigm shift in cancer treatment methodologies is underway. A fundamental step in RT planning is the accurate segmentation of tumours based on clinical diagnosis. Furthermore, novel tumour control methods, such as intensity modulated radiation therapy (IMRT) dose painting, demand the precise delineation of multiple intensity value contours to ensure optimal tumour dose distribution. Recently, convolutional neural networks (CNNs) have made significant strides in 3D image segmentation tasks, most of which present the output map at a voxel-wise level. However, because of information loss in subsequent downsampling layers, they frequently fail to identify precise object boundaries. Moreover, in the context of dose painting strategies, there is an imperative need for reliable and precise image segmentation techniques to delineate high recurrence-risk contours. To address these challenges, we introduce a 3D coarse-to-fine framework, integrating a CNN with a kernel smoothing-based probability volume contour approach (KsPC). This integrated approach generates contour-based segmentation volumes, mimicking expert-level precision and providing accurate probability contours crucial for optimizing dose painting/IMRT strategies. Our final model, named KsPC-Net, leverages a CNN backbone to automatically learn parameters in the kernel smoothing process, thereby obviating the need for user-supplied tuning parameters. The 3D KsPC-Net exploits the strength of KsPC to simultaneously identify object boundaries and generate corresponding probability volume contours, which can be trained within an end-to-end framework. The proposed model has demonstrated promising performance, surpassing state-of-the-art models when tested against the MICCAI 2021 challenge dataset (HECKTOR).
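The core idea behind a kernel smoothing-based probability volume contour can be illustrated in a few lines: smooth a voxel-wise probability map with a kernel and threshold it at several iso-probability levels to obtain the nested contours that dose painting requires. This is a minimal sketch under simplifying assumptions, not the authors' KsPC implementation: the function name, the fixed `sigma`, the chosen levels, and the toy spherical volume are all illustrative, and in KsPC-Net the smoothing parameters are learned by the CNN backbone rather than fixed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def probability_contours(prob_volume, sigma, levels=(0.3, 0.5, 0.7)):
    """Smooth a voxel-wise probability volume with a Gaussian kernel and
    threshold it at several iso-probability levels (illustrative stand-in
    for a kernel smoothing-based contour approach)."""
    smoothed = gaussian_filter(prob_volume, sigma=sigma)
    # Each level yields a binary mask; the masks are nested, and higher
    # levels give tighter contours suitable for dose-painting targets.
    return {lvl: smoothed >= lvl for lvl in levels}

# Toy example: a spherical "tumour" probability map that decays with radius.
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
prob = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2) / 12.0, 0.0, 1.0)
masks = probability_contours(prob, sigma=1.5)
```

The nested masks play the role of the multiple intensity-value contours that IMRT dose painting needs; the end-to-end framework in the paper replaces the fixed bandwidth with learned parameters.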

Citations: 0
Retrospective quantification of clinical abdominal DCE-MRI using pharmacokinetics-informed deep learning: a proof-of-concept study.
Pub Date : 2023-09-04 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1168901
Chaowei Wu, Nan Wang, Srinivas Gaddam, Lixia Wang, Hui Han, Kyunghyun Sung, Anthony G Christodoulou, Yibin Xie, Stephen Pandol, Debiao Li

Introduction: Dynamic contrast-enhanced (DCE) MRI has important clinical value for early detection, accurate staging, and therapeutic monitoring of cancers. However, conventional multi-phasic abdominal DCE-MRI has limited temporal resolution and provides qualitative or semi-quantitative assessments of tissue vascularity. In this study, the feasibility of retrospectively quantifying multi-phasic abdominal DCE-MRI by using pharmacokinetics-informed deep learning to improve temporal resolution was investigated.

Method: Forty-five subjects, consisting of healthy controls and patients with pancreatic ductal adenocarcinoma (PDAC) or chronic pancreatitis (CP), were imaged with a 2-s temporal-resolution quantitative DCE sequence, from which 30-s temporal-resolution multi-phasic DCE-MRI was synthesized based on the clinical protocol. A pharmacokinetics-informed neural network was trained to improve the temporal resolution of the multi-phasic DCE before the quantification of pharmacokinetic parameters. Through ten-fold cross-validation, the agreement between pharmacokinetic parameters estimated from synthesized multi-phasic DCE after deep learning inference and reference parameters from the corresponding quantitative DCE-MRI images was assessed. The ability of the deep learning-estimated parameters to differentiate abnormal from normal tissues was assessed as well.

Results: The pharmacokinetic parameters estimated after deep learning showed a high level of agreement with the reference values. In the cross-validation, all three pharmacokinetic parameters (transfer constant Ktrans, fractional extravascular extracellular volume ve, and rate constant kep) achieved intraclass correlation coefficients and R2 values between 0.84 and 0.94, with low coefficients of variation (10.1%, 12.3%, and 5.6%, respectively) relative to the reference values. Significant differences were found between healthy pancreas, PDAC tumor and non-tumor tissue, and CP pancreas.

Discussion: Retrospective quantification (RoQ) of clinical multi-phasic DCE-MRI is possible by deep learning. This technique has the potential to derive quantitative pharmacokinetic parameters from clinical multi-phasic DCE data for a more objective and precise assessment of cancer.
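The three parameters reported here (Ktrans, ve, kep) are tied together by the standard Tofts pharmacokinetic model, in which the tissue contrast concentration is the convolution of the arterial input function Cp(t) with an exponential residue: Ct(t) = Ktrans * integral from 0 to t of Cp(tau) * exp(-kep * (t - tau)) dtau, with kep = Ktrans / ve. A minimal forward-model sketch follows; the gamma-variate input function and all numeric values are illustrative assumptions, not the study's data or fitting code.

```python
import numpy as np

def tofts_tissue_curve(t, cp, ktrans, ve):
    """Standard Tofts model: tissue concentration as the convolution of
    the arterial input function cp(t) with an exponential residue."""
    kep = ktrans / ve  # rate constant (1/min), kep = Ktrans / ve
    dt = t[1] - t[0]
    residue = np.exp(-kep * t)
    # Discrete convolution approximating the model integral.
    return ktrans * np.convolve(cp, residue)[: len(t)] * dt

# Toy arterial input: a fast bolus with exponential washout,
# sampled every 2 s as in the quantitative DCE sequence.
t = np.arange(0, 5, 1 / 30)            # minutes
cp = 5.0 * t * np.exp(-t / 0.25)       # arbitrary-unit gamma-variate bolus
ct = tofts_tissue_curve(t, cp, ktrans=0.25, ve=0.3)
```

Parameter estimation inverts this forward model voxel-wise; the paper's contribution is recovering curves of sufficient temporal resolution from clinical multi-phasic data so that this inversion is reliable.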

Citations: 0
Spatial assessments in texture analysis: what the radiologist needs to know.
Pub Date : 2023-08-24 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1240544
Bino A Varghese, Brandon K K Fields, Darryl H Hwang, Vinay A Duddalwar, George R Matcuk, Steven Y Cen

To date, studies investigating radiomics-based predictive models have tended to err on the side of data-driven or exploratory analysis of many thousands of extracted features. In particular, spatial assessments of texture have proven to be especially adept at assessing for features of intratumoral heterogeneity in oncologic imaging, which likewise may correspond with tumor biology and behavior. These spatial assessments can be generally classified as spatial filters, which detect areas of rapid change within the grayscale in order to enhance edges and/or textures within an image, or neighborhood-based methods, which quantify gray-level differences of neighboring pixels/voxels within a set distance. Given the high dimensionality of radiomics datasets, data dimensionality reduction methods have been proposed in an attempt to optimize model performance in machine learning studies; however, it should be noted that these approaches should only be applied to training data in order to avoid information leakage and model overfitting. While the area under the receiver operating characteristic curve is perhaps the most commonly reported assessment of model performance, it is prone to overestimation when output classifications are unbalanced. In such cases, confusion matrices may additionally be reported, whereby diagnostic cut points for model-predicted probability may hold more clinical significance to clinical colleagues with respect to related forms of diagnostic testing.
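A neighborhood-based texture measure of the kind described above can be made concrete with a gray-level co-occurrence matrix (GLCM): count how often pairs of gray levels co-occur at a fixed pixel offset, normalize the counts into a joint probability distribution, and derive features such as contrast from it. A minimal 2D numpy sketch (a full radiomics pipeline would aggregate many offsets, distances, and features):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability distribution."""
    dr, dc = offset
    rows, cols = image.shape
    mat = np.zeros((levels, levels), dtype=float)
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            mat[image[r, c], image[r + dr, c + dc]] += 1
    return mat / mat.sum()

def glcm_contrast(p):
    """Contrast feature: large when neighboring gray levels differ."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

# Toy 4-level image; offset (0, 1) compares each pixel to its right neighbor.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
contrast = glcm_contrast(p)
```

A homogeneous region concentrates probability on the matrix diagonal and yields zero contrast, whereas heterogeneous (e.g. intratumoral) texture spreads mass off-diagonal and raises it.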

Citations: 0
Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis.
Pub Date : 2023-08-08 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1241651
Joseph M Rich, Lokesh N Bhardwaj, Aman Shah, Krish Gangal, Mohitha S Rapaka, Assad A Oberai, Brandon K K Fields, George R Matcuk, Vinay A Duddalwar

Introduction: Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT).

Method: The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review.

Results: The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. Papers were relatively evenly distributed between primary and secondary malignancies, and between 3-dimensional and 2-dimensional data. Many papers utilized custom-built models as a modification or variation of U-Net. The most common evaluation metric was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9.

Discussion: Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
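The DSC reported across these papers rewards voxel-wise overlap between a predicted and a reference mask: DSC = 2|A ∩ B| / (|A| + |B|), so a median of 0.85-0.9 means the masks share most of their volume. A minimal sketch (the toy cubic "lesions" are illustrative):

```python
import numpy as np

def dice(pred, ref, eps=1e-8):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

# Two overlapping cubic "lesions" in a toy 32^3 volume.
a = np.zeros((32, 32, 32), dtype=bool)
a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), dtype=bool)
b[10:22, 10:22, 10:22] = True
score = dice(a, b)
```

The `eps` term guards against division by zero when both masks are empty; identical masks score 1 and disjoint masks score 0.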

Citations: 0
Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions.
Pub Date : 2023-06-02 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1144004
Yitong Yang, Zahraw Shah, Athira J Jacob, Jackson Hair, Teodora Chitiboi, Tiziano Passerini, Jerome Yerly, Lorenzo Di Sopra, Davide Piccini, Zahra Hosseini, Puneet Sharma, Anurag Sahu, Matthias Stuber, John N Oshinski

Introduction: Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis and, in particular, delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network trained primarily on computed tomography (CT) images in two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Given that previous studies showed Mres images to have greater image quality than Mcorr images, we hypothesized that the LV volumes segmented from Mres images are closer to the manual expert-traced left ventricular endocardial border than those segmented from Mcorr images.

Method: This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) between the automatically and manually segmented LV volumes was used as the primary quantity to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image and the sharpness of the LV myocardium-blood pool interface. A two-tailed paired Student's t-test (alpha = 0.05) was used to test significance in this study.

Results & discussion: The AVD in the respiratory Mres reconstruction was lower than in the respiratory Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (n = 15, p-value = 0.03). The 3D Dice coefficient between the DL-segmented and manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03, respectively (p-value = 0.02). Sharpness on Mres images was higher than on Mcorr images: 0.15 ± 0.05 vs. 0.12 ± 0.04, respectively (p-value = 0.014, n = 15).
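Because each of the 15 patients contributes one measurement per reconstruction, every comparison above is a paired one, which is why a paired Student's t-test is the right tool: it tests the per-patient differences rather than the two group means. A minimal sketch with scipy; the per-patient Dice values below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient Dice scores for the two reconstructions
# (n = 15 paired measurements; values are illustrative only).
dice_mcorr = rng.normal(0.87, 0.03, size=15)
dice_mres = dice_mcorr + rng.normal(0.03, 0.01, size=15)

# Two-tailed paired t-test at alpha = 0.05, matching the study design.
t_stat, p_value = stats.ttest_rel(dice_mres, dice_mcorr)
significant = p_value < 0.05
```

Pairing matters here: an unpaired test would have to overcome the large between-patient spread, while the paired test only sees the (much smaller) within-patient differences.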

Conclusion: We conclude that the DL-based 3D automatic LV segmentation network trained on CT images and fine-tuned on MR images generalized better on Mres images than on Mcorr images for quantifying LV volumes.

Citations: 0
A radiomics approach to the diagnosis of femoroacetabular impingement.
Pub Date : 2023-03-20 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1151258
Eros Montin, Richard Kijowski, Thomas Youm, Riccardo Lattanzi

Introduction: Femoroacetabular Impingement (FAI) is a hip pathology characterized by impingement of the femoral head-neck junction against the acetabular rim, due to abnormalities in bone morphology. FAI is normally diagnosed by manual evaluation of morphologic features on magnetic resonance imaging (MRI). In this study, we assess, for the first time, the feasibility of using radiomics to detect FAI by automatically extracting quantitative features from images.

Material and methods: 17 patients diagnosed with monolateral FAI underwent pre-surgical MR imaging, including a 3D Dixon sequence of the pelvis. An expert radiologist drew regions of interest on the water-only Dixon images outlining the femur and acetabulum in both impingement (IJ) and healthy joints (HJ). 182 radiomic features were extracted for each hip. The dataset size was increased 60-fold with an ad hoc data augmentation tool. Features were subdivided by type and region into 24 subsets. For each, a univariate ANOVA F-value analysis was applied to find the 5 features most correlated with IJ based on p-value, for a total of 48 subsets. For each subset, a K-nearest neighbor model was trained to differentiate between IJ and HJ using the values of the radiomic features in the subset as input. The training was repeated 100 times, randomly subdividing the data into a 75%/25% training/testing split.
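The selection-plus-classifier pipeline described above (univariate ANOVA F-values to pick the top 5 features, then a K-nearest-neighbor model evaluated on a 75%/25% split) can be sketched with numpy and scipy. Everything below is an illustrative assumption rather than the study's data or code: the synthetic 40 x 12 feature table, the seed, the effect sizes, and the single split standing in for the 100 repetitions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for a radiomics table: 40 hips x 12 features,
# half impingement joints (label 1), half healthy (label 0); the first
# three features carry real signal, the rest are noise.
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 12))
X[y == 1, :3] += 1.5

# Univariate ANOVA F-value per feature; keep the 5 highest, mirroring
# the selection of the 5 features most associated with IJ.
f_vals = np.array([stats.f_oneway(X[y == 0, j], X[y == 1, j]).statistic
                   for j in range(X.shape[1])])
top5 = np.argsort(f_vals)[-5:]

def knn_predict(train_X, train_y, test_X, k=3):
    """Plain K-nearest-neighbor majority vote on Euclidean distance."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    votes = train_y[np.argsort(d, axis=1)[:, :k]]
    return (votes.mean(axis=1) >= 0.5).astype(int)

# One random 75%/25% training/testing split.
idx = rng.permutation(40)
tr, te = idx[:30], idx[30:]
pred = knn_predict(X[tr][:, top5], y[tr], X[te][:, top5])
accuracy = float((pred == y[te]).mean())
```

Note that the F-value selection here uses the whole dataset for brevity; as the texture-analysis review in this listing cautions, a leakage-free pipeline would select features inside each training split only.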

Results: The texture-based gray level features yielded the highest maximum prediction accuracy (0.972) with the smallest subset of features. This suggests that the gray image values are more homogeneously distributed in the HJ than in the IJ, which could be due to stress-related inflammation resulting from impingement.

Conclusions: We showed that radiomics can automatically distinguish IJ from HJ using water-only Dixon MRI. To our knowledge, this is the first application of radiomics for FAI diagnosis. We reported an accuracy greater than 97%, higher than the 90% accuracy reported for standard diagnostic tests. Our proposed radiomic analysis could be combined with methods for automated joint segmentation to rapidly identify patients with FAI, avoiding time-consuming radiological measurements of bone morphology.
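The feature-selection and classification pipeline described in the methods, univariate ANOVA F-value ranking followed by a K-nearest-neighbour classifier, can be sketched as follows. The function names and toy data are illustrative stand-ins, not the study's 182 radiomic features.

```python
import math
from collections import Counter

def anova_f(values, labels):
    """One-way ANOVA F-value of a single feature across the label groups."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(y, []).append(v)
    n, k = len(values), len(groups)
    grand_mean = sum(values) / n
    means = {g: sum(vs) / len(vs) for g, vs in groups.items()}
    # Between-group vs. within-group variance ratio
    ss_between = sum(len(vs) * (means[g] - grand_mean) ** 2 for g, vs in groups.items())
    ss_within = sum((v - means[g]) ** 2 for g, vs in groups.items() for v in vs)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_features(X, y, m=5):
    """Indices of the m feature columns with the largest F-values."""
    scores = [anova_f([row[j] for row in X], y) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:m]

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    nearest = sorted((math.dist(row, x), label) for row, label in zip(X_train, y_train))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy stand-in for the radiomic feature matrix: 6 hips, 3 features,
# where only feature 0 separates healthy (HJ) from impingement (IJ) joints.
X_toy = [[0.10, 5.0, 1.0], [0.20, 4.9, 1.1], [0.15, 5.1, 0.9],
         [0.90, 5.0, 1.05], [1.00, 4.95, 0.95], [0.95, 5.05, 1.0]]
y_toy = ["HJ", "HJ", "HJ", "IJ", "IJ", "IJ"]
```

In the study this selection step would be repeated per subset of features, with the 75%/25% resampling wrapped around both selection and classification.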

Citations: 1
How should studies using AI be reported? lessons from a systematic review in cardiac MRI. 如何报告使用人工智能的研究?
Pub Date : 2023-01-30 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1112841
Ahmed Maiter, Mahan Salehi, Andrew J Swift, Samer Alabed

Recent years have seen a dramatic increase in studies presenting artificial intelligence (AI) tools for cardiac imaging. Amongst these are AI tools that undertake segmentation of structures on cardiac MRI (CMR), an essential step in obtaining clinically relevant functional information. The quality of reporting of these studies carries significant implications for advancement of the field and the translation of AI tools to clinical practice. We recently undertook a systematic review to evaluate the quality of reporting of studies presenting automated approaches to segmentation in cardiac MRI (Alabed et al. 2022 Quality of reporting in AI cardiac MRI segmentation studies-a systematic review and recommendations for future studies. Frontiers in Cardiovascular Medicine 9:956811). 209 studies were assessed for compliance with the Checklist for AI in Medical Imaging (CLAIM), a framework for reporting. We found variable, and sometimes poor, quality of reporting, and found that significant information was frequently missing from publications. Compliance with CLAIM was high for descriptions of models (100%, IQR 80%-100%), but lower than expected for descriptions of study design (71%, IQR 63%-86%), datasets used in training and testing (63%, IQR 50%-67%) and model performance (60%, IQR 50%-70%). Here, we present a summary of our key findings, aimed at general readers who may not be experts in AI, and use them as a framework to discuss the factors determining quality of reporting, making recommendations for improving the reporting of research in this field. We aim to assist researchers in presenting their work and readers in their appraisal of evidence. Finally, we emphasise the need for close scrutiny of studies presenting AI tools, even in the face of the excitement surrounding AI in cardiac imaging.
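Summary statistics of the kind reported above (median compliance with an interquartile range for each CLAIM section) can be reproduced in a few lines. The per-study scores below are made-up placeholders, not the review's data.

```python
import statistics

def median_iqr(percentages):
    """Median and interquartile range (Q1, Q3) of per-study compliance
    percentages for one CLAIM section."""
    q1, med, q3 = statistics.quantiles(percentages, n=4, method="inclusive")
    return med, (q1, q3)

# Hypothetical compliance scores (one per study) for one checklist section.
study_design_scores = [50, 60, 70, 80, 90]
print(median_iqr(study_design_scores))
```

The `inclusive` method treats the scores as the whole population of assessed studies rather than a sample, which matches how a systematic review summarises its own dataset.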

Citations: 0
The promise and limitations of artificial intelligence in musculoskeletal imaging. 人工智能在肌肉骨骼成像中的前景和局限性。
Pub Date : 2023-01-01 DOI: 10.3389/fradi.2023.1242902
Patrick Debs, Laura M Fayad

With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.

Citations: 0
Case report: ultrasound assisted catheter directed thrombolysis of an embolic partial occlusion of the superior mesenteric artery. 病例报告:超声辅助导管定向溶栓治疗肠系膜上动脉栓塞性部分闭塞。
Pub Date : 2023-01-01 DOI: 10.3389/fradi.2023.1167901
Simone Bongiovanni, Marco Bozzolo, Simone Amabile, Enrico Peano, Alberto Balderi

Acute mesenteric ischemia (AMI) is a severe medical condition defined by insufficient vascular supply to the small bowel through the mesenteric vessels, resulting in necrosis and eventual gangrene of the bowel walls. We present the case of a 64-year-old man with recrudescence of prolonged epigastric pain at rest of a few hours' duration, cold sweating and episodes of vomiting. A computed tomography scan of his abdomen revealed multiple filling defects in the mid-distal part of the superior mesenteric artery (SMA) and the proximal part of the jejunal branches, associated with thickening of the small intestine walls, suggesting SMA thromboembolism and initial intestinal ischemia. Considering the absence of signs of peritonitis on abdominal examination and the presence of multiple arterial emboli, it was decided to perform endovascular treatment with ultrasound-assisted catheter-directed thrombolysis using the EkoSonic Endovascular System (EKOS), which resulted in complete dissolution of the multiple emboli and improved blood flow into the intestinal wall. The day after the procedure the patient's pain had improved significantly, and 5 days later he was discharged home asymptomatic on warfarin anticoagulation. After 1 year of follow-up the patient is well, with no further episodes of mesenteric ischemia or other embolisms.

Citations: 0
Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation. 使用标签特定眼动跟踪注释的胸部x线分类器的定位监督。
Pub Date : 2023-01-01 DOI: 10.3389/fradi.2023.1088068
Ricardo Bigolin Lanfredi, Joyce D Schroeder, Tolga Tasdizen

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Conveniently, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while they dictated CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without impacting its image-level classification.
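One plausible way to use ET snippets as localization supervision, in the spirit of the approach described above, is to add a penalty that pulls the model's attention map toward an ET-derived heatmap whenever a snippet exists for the dictated keyword. The sketch below is a simplified stand-in for the paper's actual objective: the function names are hypothetical and a plain MSE term replaces whatever localization loss the authors used.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one image-level abnormality label."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def localization_loss(att_map, et_map):
    """Mean squared error between the model's (flattened) attention map
    and the eye-tracking heatmap of the same length."""
    return sum((a - e) ** 2 for a, e in zip(att_map, et_map)) / len(att_map)

def combined_loss(p, y, att_map, et_map=None, lam=0.5):
    """Classification loss plus, when an ET snippet exists for this
    label, a weighted localization term."""
    loss = bce(p, y)
    if et_map is not None:
        loss += lam * localization_loss(att_map, et_map)
    return loss
```

Images whose dictation contains no matching keyword simply omit `et_map`, so localization supervision is applied only where ET evidence exists, without changing the image-level classification target.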

Citations: 0