
Latest publications in Frontiers in radiology

Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis.
Pub Date : 2023-08-08 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1241651
Joseph M Rich, Lokesh N Bhardwaj, Aman Shah, Krish Gangal, Mohitha S Rapaka, Assad A Oberai, Brandon K K Fields, George R Matcuk, Vinay A Duddalwar

Introduction: Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT).

Method: A literature search for deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in the PubMed, Embase, Web of Science, and Scopus electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 41 original articles published between February 2017 and March 2023 were included in the review.

Results: The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as of papers utilizing 3-dimensional vs. 2-dimensional data. Many papers utilized custom-built models that are modifications or variations of U-Net. The most common metric for evaluation was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9.
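
The Dice similarity coefficient mentioned above is the standard overlap metric between a predicted and a reference segmentation mask. As a minimal illustration (not code from any of the reviewed papers), it can be computed from two binary masks as follows:

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|), from 0 (no overlap) to 1."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        return 1.0  # both masks empty: count as perfect agreement by convention
    return 2.0 * intersection / denominator

# Toy example: a 2D slice with a small lesion mask.
pred = np.zeros((4, 4), dtype=int)
truth = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1   # predicted lesion: 4 voxels
truth[1:3, 1:4] = 1  # reference lesion: 6 voxels, 4 of them shared with the prediction
print(dice_similarity_coefficient(pred, truth))  # 2 * 4 / (4 + 6) = 0.8
```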

Discussion: Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Strategies commonly applied to improve performance include data augmentation, utilization of large public datasets, preprocessing (including denoising and cropping), and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.

Citations: 0
Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation.
Pub Date : 2023-06-22 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1088068
Ricardo Bigolin Lanfredi, Joyce D Schroeder, Tolga Tasdizen

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Opportunely, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without impacting its image-level classification.
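
The abstract describes the supervision strategy only at a high level. The hypothetical loss below is a hedged sketch, not the authors' implementation, of one way label-specific gaze heatmaps could supervise localization alongside image-level classification; the tensor names, the map-matching term, and the 0.5 weighting are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def localization_supervised_loss(class_logits, activation_maps, labels, gaze_heatmaps):
    """Hypothetical combined loss: image-level classification plus a term pulling each
    label's spatial activation map toward its eye-tracking-derived fixation heatmap.

    class_logits:    (B, C) image-level predictions
    activation_maps: (B, C, H, W) per-label spatial maps produced by the CNN
    labels:          (B, C) binary image-level labels (float)
    gaze_heatmaps:   (B, C, H, W) normalized fixation heatmaps; all-zero where unavailable
    """
    cls_loss = F.binary_cross_entropy_with_logits(class_logits, labels)

    # Supervise localization only for labels that are present and have gaze data.
    has_gaze = (gaze_heatmaps.flatten(2).sum(-1) > 0) & (labels > 0.5)  # (B, C) bool
    if has_gaze.any():
        pred_maps = torch.sigmoid(activation_maps)[has_gaze]  # (N, H, W)
        target_maps = gaze_heatmaps[has_gaze]                 # (N, H, W)
        loc_loss = F.mse_loss(pred_maps, target_maps)
    else:
        loc_loss = class_logits.new_zeros(())

    return cls_loss + 0.5 * loc_loss  # the weighting here is an arbitrary choice
```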

Citations: 0
Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions.
Pub Date : 2023-06-02 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1144004
Yitong Yang, Zahraw Shah, Athira J Jacob, Jackson Hair, Teodora Chitiboi, Tiziano Passerini, Jerome Yerly, Lorenzo Di Sopra, Davide Piccini, Zahra Hosseini, Puneet Sharma, Anurag Sahu, Matthias Stuber, John N Oshinski

Introduction: Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis and, in particular, for delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy, including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network trained primarily on computed tomography (CT) images in two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Given that previous studies showed Mres images to have greater image quality than Mcorr images, we hypothesized that the LV volumes segmented from Mres images would be closer to the manual expert-traced left ventricular endocardial border than those segmented from Mcorr images.

Method: This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady-state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) between the automatically and manually segmented LV volumes was used as the primary quantity to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image and the sharpness of the LV myocardium-blood pool interface. A two-tailed paired Student's t-test (alpha = 0.05) was used to test significance.
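
For readers unfamiliar with the primary endpoint, the sketch below shows how an absolute volume difference and a two-tailed paired Student's t-test could be computed; the volumes are invented stand-in values, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative per-patient LV volumes in ml (stand-in values only).
manual_vol = np.array([120.0, 95.0, 150.0, 110.0, 130.0])
auto_vol_mcorr = np.array([135.0, 80.0, 170.0, 125.0, 150.0])
auto_vol_mres = np.array([124.0, 92.0, 155.0, 113.0, 127.0])

# Absolute volume difference (AVD) between automatic and manual segmentation.
avd_mcorr = np.abs(auto_vol_mcorr - manual_vol)
avd_mres = np.abs(auto_vol_mres - manual_vol)

# Two-tailed paired Student's t-test on the per-patient AVDs (alpha = 0.05).
t_stat, p_value = stats.ttest_rel(avd_mcorr, avd_mres)
print(f"mean AVD Mcorr: {avd_mcorr.mean():.1f} ml, "
      f"mean AVD Mres: {avd_mres.mean():.1f} ml, p = {p_value:.3f}")
```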

Results & discussion: The AVD in the respiratory Mres reconstruction was lower than the AVD in the respiratory Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (n = 15, p-value = 0.03). The 3D Dice coefficient between the DL-segmented masks and the manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03, respectively, with a p-value of 0.02. Sharpness on Mres images was higher than on Mcorr images: 0.15 ± 0.05 vs. 0.12 ± 0.04, respectively, with a p-value of 0.014 (n = 15).

Conclusion: We conclude that the DL-based 3D automatic LV segmentation network trained on CT images and fine-tuned on MR images generalized better on Mres images than on Mcorr images for quantifying LV volumes.

Citations: 0
Digital transformation of career landscapes in radiology: personal and professional implications.
Pub Date : 2023-05-22 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1180699
Anjali Agrawal

Millennial radiology is marked by technical disruptions. Advances in internet, digital communications, and computing technology paved the way for digitalized workflow orchestration of busy radiology departments. The COVID pandemic brought teleradiology to the forefront, highlighting its importance in maintaining continuity of radiological services and making it an integral component of radiology practice. Increasing computing power and integrated multimodal data are driving the incorporation of artificial intelligence at various stages of the radiology image and reporting cycle. These developments have transformed, and will continue to transform, the career landscape in radiology, offering more options for radiologists with varied interests and career goals. The ability to work from anywhere at any time needs to be balanced with other aspects of life. Robust communication, internal and external collaboration, self-discipline, and self-motivation are key to achieving the desired balance while practicing radiology the unconventional way.

Citations: 0
Mouse brain MR super-resolution using a deep learning network trained with optical imaging data.
Pub Date : 2023-05-15 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1155866
Zifei Liang, Jiangyang Zhang

Introduction: The resolution of magnetic resonance imaging is often limited to the millimeter level due to its inherent signal-to-noise disadvantage compared to other imaging modalities. Super-resolution (SR) of MRI data aims to enhance its resolution and diagnostic value. While deep learning-based SR has shown potential, its applications in MRI remain limited, especially for preclinical MRI, where large high-resolution MRI datasets for training are often lacking.

Methods: In this study, we first used high-resolution mouse brain auto-fluorescence (AF) data acquired using serial two-photon tomography (STPT) to examine the performance of deep learning-based SR for mouse brain images.

Results: We found that the best SR performance was obtained when the resolutions of training and target data were matched. We then applied the network trained using AF data to MRI data of the mouse brain, and found that the performance of the SR network depended on the tissue contrast presented in the MRI data. Using transfer learning and a limited set of high-resolution mouse brain MRI data, we were able to fine-tune the initial network trained using AF to enhance the resolution of MRI data.
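
The abstract does not specify the network architecture or training details, so the sketch below is purely illustrative: it shows one common transfer-learning recipe in which a feature extractor pretrained on the AF data is frozen and only the remaining layers are fine-tuned on the limited high-resolution MRI data.

```python
import torch
import torch.nn as nn

class SimpleSRNet(nn.Module):
    """Hypothetical single-channel super-resolution network (not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.reconstruct = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x):
        return self.reconstruct(self.features(x))

model = SimpleSRNet()
# model.load_state_dict(torch.load("sr_pretrained_on_AF.pt"))  # hypothetical AF checkpoint

# Transfer learning: freeze the AF-pretrained feature extractor, fine-tune the rest.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.L1Loss()

# One illustrative fine-tuning step on random stand-in tensors.
low_res = torch.rand(4, 1, 64, 64)   # stand-in for upsampled low-resolution MRI patches
high_res = torch.rand(4, 1, 64, 64)  # stand-in for high-resolution MRI targets
optimizer.zero_grad()
loss = loss_fn(model(low_res), high_res)
loss.backward()
optimizer.step()
```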

Discussion: Our results suggest that deep learning SR networks trained using high-resolution data of a different modality can be applied to MRI data after transfer learning.

Citations: 0
uRP: An integrated research platform for one-stop analysis of medical images.
Pub Date : 2023-04-18 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1153784
Jiaojiao Wu, Yuwei Xia, Xuechun Wang, Ying Wei, Aie Liu, Arun Innanje, Meng Zheng, Lei Chen, Jing Shi, Liye Wang, Yiqiang Zhan, Xiang Sean Zhou, Zhong Xue, Feng Shi, Dinggang Shen

Introduction: Medical image analysis is of tremendous importance in serving clinical diagnosis, treatment planning, as well as prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and has potentially low reproducibility.

Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analyses of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable.

Results and discussion: The uRP offers three advantages: 1) it spans a wealth of algorithms for image processing, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline; 2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses; 3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage for multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.
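
The abstract describes the modular architecture only in general terms; the toy registry below is not the uRP codebase, merely a sketch of how functional modules could be registered and chained into a customizable, one-stop pipeline.

```python
from typing import Callable, Dict

# Each module takes and returns a context dict; the names and steps are illustrative.
MODULES: Dict[str, Callable[[dict], dict]] = {}

def register(name: str):
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        MODULES[name] = fn
        return fn
    return wrap

@register("segment")
def segment(ctx: dict) -> dict:
    ctx["mask"] = "lesion mask placeholder"  # e.g. output of an automatic segmentation module
    return ctx

@register("quantify")
def quantify(ctx: dict) -> dict:
    ctx["volume_ml"] = 42.0  # e.g. a quantitative measure derived from the mask
    return ctx

def run_pipeline(step_names, image) -> dict:
    ctx = {"image": image}
    for name in step_names:  # modules can be applied, combined, or customized per domain
        ctx = MODULES[name](ctx)
    return ctx

print(run_pipeline(["segment", "quantify"], image="CT volume placeholder"))
```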

Citations: 0
A radiomics approach to the diagnosis of femoroacetabular impingement.
Pub Date : 2023-03-20 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1151258
Eros Montin, Richard Kijowski, Thomas Youm, Riccardo Lattanzi

Introduction: Femoroacetabular Impingement (FAI) is a hip pathology characterized by impingement of the femoral head-neck junction against the acetabular rim, due to abnormalities in bone morphology. FAI is normally diagnosed by manual evaluation of morphologic features on magnetic resonance imaging (MRI). In this study, we assess, for the first time, the feasibility of using radiomics to detect FAI by automatically extracting quantitative features from images.

Material and methods: Seventeen patients diagnosed with monolateral FAI underwent pre-surgical MR imaging, including a 3D Dixon sequence of the pelvis. An expert radiologist drew regions of interest on the water-only Dixon images, outlining the femur and acetabulum in both the impingement joint (IJ) and the healthy joint (HJ). A total of 182 radiomic features were extracted for each hip. The dataset size was increased 60-fold with an ad hoc data augmentation tool. Features were subdivided by type and region into 24 subsets. For each, a univariate ANOVA F-value analysis was applied to find the 5 features most correlated with IJ based on p-value, for a total of 48 subsets. For each subset, a K-nearest neighbor model was trained to differentiate between IJ and HJ using the values of the radiomic features in the subset as input. The training was repeated 100 times, randomly subdividing the data into 75%/25% training/testing splits.
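
A minimal sketch of this selection-and-classification pipeline, written with scikit-learn on randomly generated stand-in features (the study's actual feature values, subsets, and KNN settings are not given in the abstract), could look like this:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: rows are hips, columns are radiomic features from one subset;
# y = 1 for impingement joints (IJ), 0 for healthy joints (HJ).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)

# Univariate ANOVA F-test: keep the 5 features most associated with IJ vs. HJ.
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
X_selected = selector.transform(X)

# 100 random 75%/25% splits with a K-nearest-neighbor classifier.
accuracies = []
for seed in range(100):
    X_train, X_test, y_train, y_test = train_test_split(
        X_selected, y, test_size=0.25, random_state=seed
    )
    knn = KNeighborsClassifier().fit(X_train, y_train)
    accuracies.append(knn.score(X_test, y_test))

print(f"max accuracy over 100 repeats: {max(accuracies):.3f}")
```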

Results: The texture-based gray-level features yielded the highest maximum prediction accuracy (0.972) with the smallest subset of features. This suggests that the gray image values are more homogeneously distributed in the HJ than in the IJ, which could be due to stress-related inflammation resulting from impingement.

Conclusions: We showed that radiomics can automatically distinguish IJ from HJ using water-only Dixon MRI. To our knowledge, this is the first application of radiomics for FAI diagnosis. We reported an accuracy greater than 97%, which is higher than the 90% accuracy reported for standard diagnostic tests for detecting FAI. Our proposed radiomic analysis could be combined with methods for automated joint segmentation to rapidly identify patients with FAI, avoiding time-consuming radiological measurements of bone morphology.

Citations: 1
How should studies using AI be reported? lessons from a systematic review in cardiac MRI.
Pub Date : 2023-01-30 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1112841
Ahmed Maiter, Mahan Salehi, Andrew J Swift, Samer Alabed

Recent years have seen a dramatic increase in studies presenting artificial intelligence (AI) tools for cardiac imaging. Amongst these are AI tools that undertake segmentation of structures on cardiac MRI (CMR), an essential step in obtaining clinically relevant functional information. The quality of reporting of these studies carries significant implications for advancement of the field and the translation of AI tools to clinical practice. We recently undertook a systematic review to evaluate the quality of reporting of studies presenting automated approaches to segmentation in cardiac MRI (Alabed et al. 2022. Quality of reporting in AI cardiac MRI segmentation studies: a systematic review and recommendations for future studies. Frontiers in Cardiovascular Medicine 9:956811). A total of 209 studies were assessed for compliance with the Checklist for AI in Medical Imaging (CLAIM), a framework for reporting. We found variable, and sometimes poor, quality of reporting and identified significant and frequently missing information in publications. Compliance with CLAIM was high for descriptions of models (100%, IQR 80%-100%), but lower than expected for descriptions of study design (71%, IQR 63%-86%), datasets used in training and testing (63%, IQR 50%-67%), and model performance (60%, IQR 50%-70%). Here, we present a summary of our key findings, aimed at general readers who may not be experts in AI, and use them as a framework to discuss the factors determining quality of reporting, making recommendations for improving the reporting of research in this field. We aim to assist researchers in presenting their work and readers in their appraisal of evidence. Finally, we emphasise the need for close scrutiny of studies presenting AI tools, even in the face of the excitement surrounding AI in cardiac imaging.

Citations: 0
The promise and limitations of artificial intelligence in musculoskeletal imaging.
Pub Date : 2023-01-01 DOI: 10.3389/fradi.2023.1242902
Patrick Debs, Laura M Fayad

With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.

Citations: 0
Case report: ultrasound assisted catheter directed thrombolysis of an embolic partial occlusion of the superior mesenteric artery.
Pub Date : 2023-01-01 DOI: 10.3389/fradi.2023.1167901
Simone Bongiovanni, Marco Bozzolo, Simone Amabile, Enrico Peano, Alberto Balderi

Acute mesenteric ischemia (AMI) is a severe medical condition defined by insufficient vascular supply to the small bowel through the mesenteric vessels, resulting in necrosis and eventual gangrene of the bowel walls. We present the case of a 64-year-old man with recrudescence of prolonged epigastric pain at rest of a few hours' duration, cold sweating, and episodes of vomiting. A computed tomography scan of his abdomen revealed multiple filling defects in the mid-distal part of the superior mesenteric artery (SMA) and the proximal part of the jejunal branches, associated with thickening of the small intestine walls, suggesting SMA thromboembolism and initial intestinal ischemia. Considering the absence of signs of peritonitis at the abdominal examination and the presence of multiple arterial emboli, it was decided to perform endovascular treatment with ultrasound-assisted catheter-directed thrombolysis using the EkoSonic Endovascular System (EKOS), which resulted in complete dissolution of the multiple emboli and improved blood flow into the intestine wall. The day after the procedure the patient's pain improved significantly, and 5 days later he was discharged home, asymptomatic, on warfarin anticoagulation. After 1 year of follow-up the patient is well, with no further episodes of mesenteric ischemia or other embolisms.

Citations: 0