
Latest publications in Proceedings. IEEE International Symposium on Biomedical Imaging

A Course-Focused Dual Curriculum For Image Captioning.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/ISBI48211.2021.9434055
Mohammad Alsharid, Rasheed El-Bouri, Harshita Sharma, Lior Drukker, Aris T Papageorghiou, J Alison Noble

We propose a curriculum learning method for captioning fetal ultrasound images that trains a model to dynamically transition between two different modalities (image and text) as training progresses. Specifically, we propose a course-focused dual curriculum method, where a course is a stage of training governed by a curriculum based on only one of the two modalities involved in image captioning. We compare two configurations of the course-focused dual curriculum: an image-first configuration, which orders the early training batches primarily by the complexity of the image information before gradually shifting toward an ordering based on the complexity of the text information, and a text-first configuration, which operates in reverse. The evaluation results show that dynamically transitioning between text and images over the epochs of training improves results compared to considering both modalities in equal measure in every epoch.
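The batch-ordering idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `img_complexity` and `txt_complexity` scores and the linear modality-weight schedule are assumptions made for the sake of the example.

```python
# Sketch of a course-focused dual-curriculum batch ordering.
# Early in training, one modality's complexity score dominates the
# ordering; the weight shifts linearly to the other modality by the end.

def dual_curriculum_order(samples, epoch, total_epochs, image_first=True):
    """Return sample indices in easy-to-hard order for this epoch.

    `samples` is a list of dicts with precomputed 'img_complexity'
    and 'txt_complexity' scores (hypothetical fields; the paper's
    actual complexity measures may differ).
    """
    t = epoch / max(total_epochs - 1, 1)       # progress: 0 -> 1
    w_img = (1.0 - t) if image_first else t    # modality weight shifts
    scores = [
        w_img * s["img_complexity"] + (1.0 - w_img) * s["txt_complexity"]
        for s in samples
    ]
    # easy-to-hard ordering under the current course's blended score
    return sorted(range(len(samples)), key=lambda i: scores[i])
```

A text-first course would pass `image_first=False`, reversing which modality dominates the early epochs.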

Volume 2021, pp. 716-720.
Citations: 4
MULTI-DOMAIN LEARNING BY META-LEARNING: TAKING OPTIMAL STEPS IN MULTI-DOMAIN LOSS LANDSCAPES BY INNER-LOOP LEARNING.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/ISBI48211.2021.9433977
Anthony Sicilia, Xingchen Zhao, Davneet S Minhas, Erin E O'Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, Seong Jae Hwang

We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions which explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques to new problems with well-established models, e.g. U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is model-agnostic, requiring no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution on a well-suited problem in medical imaging: the automatic segmentation of white matter hyperintensity (WMH), using two neuroimaging modalities (T1-MR and FLAIR) whose complementary information fits our problem.
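The weighted-loss idea can be illustrated with a toy sketch. The paper estimates posterior distributions over the loss hyperparameters with inner-loop gradient steps; here, as a simple stand-in, the weights are renormalized from the per-domain losses (the softmax form and the `tau` knob are illustrative assumptions, not the authors' procedure).

```python
import math

def weighted_mdl_loss(domain_losses, weights):
    """Combine per-domain losses with the current hyperparameter weights."""
    return sum(w * l for w, l in zip(weights, domain_losses))

def inner_loop_reweight(domain_losses, tau=1.0):
    """One illustrative 'inner step': domains with higher loss receive
    higher weight (softmax over losses), mimicking a dynamic loss-weight
    update that touches no model parameters or architecture."""
    exps = [math.exp(l / tau) for l in domain_losses]
    z = sum(exps)
    return [e / z for e in exps]
```

The point of the sketch is that adapting the loss weights, rather than the network, is what keeps the approach model-agnostic.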

Volume 2021, pp. 650-654.
Citations: 4
INTEGRATIVE RADIOMICS MODELS TO PREDICT BIOPSY RESULTS FOR NEGATIVE PROSTATE MRI.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433879
Haoxin Zheng, Qi Miao, Steven S Raman, Fabien Scalzo, Kyunghyun Sung

Multi-parametric MRI (mpMRI) is a powerful non-invasive tool for diagnosing prostate cancer (PCa) and is widely recommended before prostate biopsies. The Prostate Imaging Reporting and Data System (PI-RADS) is used to interpret mpMRI. However, when the pre-biopsy mpMRI is negative (PI-RADS 1 or 2), there is no consensus on which patients should undergo prostate biopsies. Recently, radiomics has shown great ability in quantitative imaging analysis, with outstanding performance on computer-aided diagnosis tasks. We propose an integrative radiomics-based approach to predict prostate biopsy results when the pre-biopsy mpMRI is negative. Specifically, the proposed approach combines radiomics features and clinical features with machine learning to stratify positive and negative biopsy groups among patients with negative mpMRI. We retrospectively reviewed all clinical prostate MRIs and identified 330 negative mpMRI scans with subsequent biopsy results. Our proposed model was trained and validated with 10-fold cross-validation and reached a negative predictive value (NPV) of 0.99, a sensitivity of 0.88, and a specificity of 0.63 in receiver operating characteristic (ROC) analysis. Compared with results from existing methods, ours achieved 11.2% higher NPV and 87.2% higher sensitivity at the cost of 23.2% lower specificity.
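For reference, the reported screening metrics can be computed from a confusion matrix as follows. This is a generic sketch of NPV, sensitivity, and specificity for binary outcomes, not the paper's 10-fold cross-validation pipeline.

```python
def screening_metrics(y_true, y_pred):
    """NPV, sensitivity, specificity for binary labels (1 = positive
    biopsy). NPV is the key number for a rule-out screening model:
    of the patients the model clears, how many truly had a negative
    biopsy."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    npv = tn / (tn + fn) if tn + fn else float("nan")
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return npv, sensitivity, specificity
```

A high NPV with moderate specificity, as reported, means the model rarely misses cancers among the patients it would spare from biopsy, at the price of still flagging some benign cases.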

Volume 2021, pp. 877-881.
Citations: 0
RECONSTRUCTION AND SEGMENTATION OF PARALLEL MR DATA USING IMAGE DOMAIN DEEP-SLR.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9434056
Aniket Pramanik, Mathews Jacob

The main focus of this work is a novel framework for the joint reconstruction and segmentation of parallel MRI (PMRI) brain data. We introduce an image domain deep network for calibrationless recovery of undersampled PMRI data. The proposed approach is a deep-learning (DL) based generalization of local low-rank approaches for uncalibrated PMRI recovery, including CLEAR [6]. Since the image domain approach exploits additional annihilation relations compared to k-space based approaches, we expect it to offer improved performance. To minimize segmentation errors resulting from undersampling artifacts, we combined the proposed scheme with a segmentation network and trained it in an end-to-end fashion. In addition to reducing segmentation errors, this approach also offers improved reconstruction performance by reducing overfitting; the reconstructed images exhibit less blurring and sharper edges than those from an independently trained reconstruction network.

Citations: 0
Deep Learning-Based Parameter Mapping with Uncertainty Estimation for Fat Quantification using Accelerated Free-Breathing Radial MRI.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433938
Shu-Fu Shih, Sevgi Gokce Kafali, Tess Armstrong, Xiaodong Zhong, Kara L Calkins, Holden H Wu

Deep learning has been applied to remove artifacts from undersampled MRI and to replace time-consuming signal fitting in quantitative MRI, but these have usually been treated as separate tasks, which does not fully exploit the shared information. This work proposes a new two-stage framework that completes these two tasks in a concerted approach and also estimates pixel-wise uncertainty levels. Results from accelerated free-breathing radial MRI for liver fat quantification demonstrate that the proposed framework can reconstruct high-quality images from undersampled radial data, quantify liver fat accurately, and detect uncertainty caused by noisy input data. The proposed framework achieved 3-fold acceleration to <1 min scan time and reduced the computational time for signal fitting to <100 ms/slice in free-breathing liver fat quantification.
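One common way to obtain pixel-wise uncertainty is a Gaussian negative log-likelihood loss, where the network predicts a log-variance for each pixel alongside the parameter map. The sketch below shows that generic formulation; it is an assumption for illustration, and the paper's exact uncertainty model may differ.

```python
import math

def gaussian_nll(pred, target, log_var):
    """Per-pixel Gaussian negative log-likelihood (constant terms
    dropped): the squared residual is scaled by the predicted
    variance, and the log-variance term keeps the network from
    simply inflating variance everywhere. Returns the mean over
    pixels; high predicted variance flags uncertain pixels."""
    return sum(
        (p - t) ** 2 * math.exp(-lv) + lv
        for p, t, lv in zip(pred, target, log_var)
    ) / len(pred)
```

Under this loss, raising the predicted variance for a noisy pixel lowers its residual penalty, which is how the network learns to report uncertainty on unreliable input data.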

Volume 2021, pp. 433-437.
Citations: 5
UNIMODAL CYCLIC REGULARIZATION FOR TRAINING MULTIMODAL IMAGE REGISTRATION NETWORKS.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433926
Zhe Xu, Jiangpeng Yan, Jie Luo, William Wells, Xiu Li, Jayender Jagadeesan

The loss function of an unsupervised multimodal image registration framework has two terms: a similarity metric and a regularization term. In the deep learning era, researchers have proposed many approaches for automatically learning the similarity metric, which has proven effective in improving registration performance. However, for the regularization term, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration to constrain the deformation field of multimodal registration. In an experiment on abdominal CT-MR registration, the proposed method yields better results than conventional regularization methods, especially for severely deformed local regions.
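For context, a typical hand-crafted regularizer of the kind the paper seeks to replace is a first-order smoothness penalty on the displacement field. A minimal sketch over a 2-D grid follows; the paper learns its regularization rather than using this formula.

```python
def smoothness_penalty(field):
    """Sum of squared forward differences of a 2-D displacement
    field, where field[i][j] is an (dx, dy) vector. Large values
    mean neighboring pixels are displaced very differently, i.e.
    an implausibly rough deformation."""
    h, w = len(field), len(field[0])
    total = 0.0
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):   # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    dx = field[ni][nj][0] - field[i][j][0]
                    dy = field[ni][nj][1] - field[i][j][1]
                    total += dx * dx + dy * dy
    return total
```

The drawback the paper targets is visible here: the penalty encodes a fixed, task-agnostic notion of "plausible" deformation, regardless of the anatomy being registered.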

Citations: 5
SHAPE-REGULARIZED UNSUPERVISED LEFT VENTRICULAR MOTION NETWORK WITH SEGMENTATION CAPABILITY IN 3D+TIME ECHOCARDIOGRAPHY.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433888
Kevinminh Ta, Shawn S Ahn, John C Stendahl, Albert J Sinusas, James S Duncan

Accurate motion estimation and segmentation of the left ventricle from medical images are important tasks for quantitative evaluation of cardiovascular health. Echocardiography offers a cost-efficient and non-invasive modality for examining the heart, but provides additional challenges for automated analyses due to the low signal-to-noise ratio inherent in ultrasound imaging. In this work, we propose a shape regularized convolutional neural network for estimating dense displacement fields between sequential 3D B-mode echocardiography images with the capability of also predicting left ventricular segmentation masks. Manually traced segmentations are used as a guide to assist in the unsupervised estimation of displacement between a source and a target image while also serving as labels to train the network to additionally predict segmentations. To enforce realistic cardiac motion patterns, a flow incompressibility term is also incorporated to penalize divergence. Our proposed network is evaluated on an in vivo canine 3D+t B-mode echocardiographic dataset. It is shown that the shape regularizer improves the motion estimation performance of the network and our overall model performs favorably against competing methods.
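The flow-incompressibility term penalizes the divergence of the displacement field. Below is a minimal finite-difference sketch on a 2-D grid; the paper works in 3D+time, and the forward-difference discretization here is an assumption for illustration.

```python
def divergence_penalty(field):
    """field[i][j] is a (u, v) displacement vector on a 2-D grid.
    Approximate div = du/dx + dv/dy with forward differences and
    penalize its square; near-zero divergence corresponds to
    volume-preserving (incompressible) motion."""
    h, w = len(field), len(field[0])
    total = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            du_dx = field[i][j + 1][0] - field[i][j][0]
            dv_dy = field[i + 1][j][1] - field[i][j][1]
            total += (du_dx + dv_dy) ** 2
    return total
```

Adding a term like this to the training loss discourages the network from predicting motion that locally creates or destroys tissue volume, which is physically unrealistic for the myocardium.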

Volume 2021, pp. 536-540.
Citations: 1
A UNIFIED CONDITIONAL DISENTANGLEMENT FRAMEWORK FOR MULTIMODAL BRAIN MR IMAGE TRANSLATION.
Pub Date : 2021-04-01 DOI: 10.1109/isbi48211.2021.9433897
Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo

Multimodal MRI provides complementary and clinically relevant information to probe tissue condition and to characterize various diseases. However, it is often difficult to acquire sufficiently many modalities from the same subject due to limitations in study plans, even though quantitative analysis still demands them. In this work, we propose a unified conditional disentanglement framework to synthesize any arbitrary modality from an input modality. Our framework hinges on a cycle-constrained conditional adversarial training approach, in which a modality-agnostic encoder extracts a modality-invariant anatomical feature and a conditioned decoder generates the target modality. We validate our framework on four MRI modalities from the BraTS'18 database, including T1-weighted, T1 contrast enhanced, T2-weighted, and FLAIR MRI, showing superior synthesis quality over the comparison methods. In addition, we report results from experiments on a tumor segmentation task carried out with synthesized data.
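The cycle constraint is typically enforced by penalizing the difference between an image and its reconstruction after translating to the target modality and back. A minimal sketch of such a term follows; the L1 form is a common choice in image translation, assumed here rather than taken from the paper.

```python
def cycle_consistency_loss(x, x_cycled):
    """Mean absolute error between a flattened source image `x` and
    `x_cycled`, its reconstruction after the forward translation
    (source -> target modality) followed by the reverse translation
    (target -> source). A small value means the translation preserved
    the underlying anatomy."""
    return sum(abs(a - b) for a, b in zip(x, x_cycled)) / len(x)
```

During training this term is added to the conditional adversarial loss, so the generator cannot satisfy the discriminator by inventing anatomy that is not in the input.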

Citations: 24
HISTOPATHOLOGY IMAGE REGISTRATION BY INTEGRATED TEXTURE AND SPATIAL PROXIMITY BASED LANDMARK SELECTION AND MODIFICATION.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9434114
Pangpang Liu, Fusheng Wang, George Teodoro, Jun Kong

Three-dimensional (3D) digital pathology is emerging for next-generation tissue-based cancer research. To enable such histopathology image volume analysis, serial histopathology slides need to be well aligned. In this paper, we propose a fine-tuning method for histopathology image registration with integrated landmark evaluation by texture and spatial proximity measures. Representative anatomical structures and image corner features are first detected as landmark candidates. Next, we identify strong matched landmarks and modify weak ones by leveraging image texture features and landmark spatial proximity measures. Both qualitative and quantitative results of extensive experiments demonstrate that our proposed method is robust and further enhances the registration accuracy of our previously registered image set by 31.15% (correlation), 4.88% (mutual information), and 41.02% (mean squared error), respectively. These promising experimental results suggest that our method can be used as a fine-tuning module to further boost registration accuracy, a prerequisite for spatial and morphological analysis of histology in an information-lossless 3D tissue space for cancer research.
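The combined texture-and-proximity evaluation can be sketched as follows. The equal weighting, linear proximity decay, and threshold below are illustrative assumptions, not the paper's actual measures.

```python
def match_score(tex_sim, dist, max_dist, w_tex=0.5):
    """Blend a texture-similarity term (tex_sim in [0, 1]) with a
    spatial-proximity term that decays linearly to 0 at max_dist,
    for one candidate landmark pair."""
    proximity = max(0.0, 1.0 - dist / max_dist)
    return w_tex * tex_sim + (1.0 - w_tex) * proximity

def classify_matches(pairs, threshold=0.7, max_dist=50.0):
    """Split candidate landmark pairs, given as (tex_sim, dist)
    tuples, into strong matches (kept as-is) and weak matches
    (flagged for modification)."""
    strong, weak = [], []
    for tex_sim, dist in pairs:
        (strong if match_score(tex_sim, dist, max_dist) >= threshold
         else weak).append((tex_sim, dist))
    return strong, weak
```

Requiring both terms to agree is the key point: a pair that looks alike but lies far from where its neighbors map to, or a nearby pair with dissimilar texture, scores low and is treated as weak.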

Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2021, pp. 1827-1830.
Cited by: 3
DYNAMIC IMAGING USING DEEP BILINEAR UNSUPERVISED LEARNING (DEBLUR).
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433882
Abdul Haseeb Ahmed, Prashant Nagpal, Stanley Kruger, Mathews Jacob

Bilinear models such as low-rank and compressed sensing, which decompose the dynamic data into spatial and temporal factors, are powerful and memory-efficient tools for the recovery of dynamic MRI data. These methods rely on sparsity and energy-compaction priors on the factors to regularize the recovery. Motivated by the deep image prior, we introduce a novel bilinear model whose factors are regularized using convolutional neural networks. To reduce the run time, we initialize the CNN parameters by pre-training them on pre-acquired data with a longer acquisition time. Since fully sampled data is not available, pre-training is performed on undersampled data in an unsupervised fashion. We use sparsity regularization of the network parameters to minimize overfitting of the network to measurement noise. Our experiments on free-breathing and ungated cardiac CINE data acquired using a navigated golden-angle gradient-echo radial sequence show that our method reduces spatial blurring compared to low-rank and SToRM reconstructions.
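The bilinear backbone of such methods can be sketched as masked low-rank recovery by alternating ridge-regularized least squares. This is only the classical bilinear part: the paper's CNN-regularized factors, the sparsity penalty on network parameters, and the unsupervised pre-training are all omitted, and real dynamic MRI measurements live in k-space rather than the image-domain mask assumed here.

```python
import numpy as np

def bilinear_recover(Y, M, rank=2, iters=50, lam=1e-3):
    """Recover X ~= U @ V.T from masked samples Y = M * X.

    Alternates closed-form ridge-regularized least-squares updates of the
    spatial factor U (rows = voxels) and temporal factor V (rows = frames).
    """
    ns, nt = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((ns, rank))
    V = rng.standard_normal((nt, rank))
    I = lam * np.eye(rank)  # small ridge keeps the normal equations invertible
    for _ in range(iters):
        for i in range(ns):            # update each spatial-factor row
            w = M[i] > 0               # observed time points for voxel i
            A = V[w]
            U[i] = np.linalg.solve(A.T @ A + I, A.T @ Y[i, w])
        for t in range(nt):            # update each temporal-factor row
            w = M[:, t] > 0            # observed voxels at frame t
            A = U[w]
            V[t] = np.linalg.solve(A.T @ A + I, A.T @ Y[w, t])
    return U, V
```

In the CNN-regularized variant, these closed-form factor updates would be replaced by gradient steps through two small networks that generate U and V, which is what motivates the pre-training step described in the abstract.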

Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2021, pp. 1099-1102.
Cited by: 4