Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/ISBI48211.2021.9434055
Mohammad Alsharid, Rasheed El-Bouri, Harshita Sharma, Lior Drukker, Aris T Papageorghiou, J Alison Noble
We propose a curriculum learning method for captioning fetal ultrasound images by training a model to dynamically transition between two different modalities (image and text) as training progresses. Specifically, we propose a course-focused dual curriculum method, where a course is training with a curriculum based on only one of the two modalities involved in image captioning. We compare two configurations of the course-focused dual curriculum: an image-first course-focused dual curriculum, which orders the early training batches primarily by the complexity of the image information before gradually shifting the batch ordering toward the complexity of the text information, and a text-first course-focused dual curriculum, which operates in reverse. The evaluation results show that dynamically transitioning between text and images over epochs of training improves results compared to the scenario where both modalities are weighted equally in every epoch.
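The modality-course scheduling described in the abstract can be sketched as a difficulty-weighted batch ordering. The linear blending schedule, the field names, and the easiest-first ordering below are illustrative assumptions, not the authors' implementation:

```python
def dual_curriculum_order(samples, epoch, num_epochs, image_first=True):
    """Order training samples for one epoch of a course-focused dual curriculum.

    Each sample carries precomputed difficulty scores for both modalities.
    Early epochs sort by the first course's modality; the sorting key shifts
    linearly toward the second modality as training progresses.
    """
    t = epoch / max(1, num_epochs - 1)       # 0 at the start, 1 at the end
    w_img = (1.0 - t) if image_first else t  # weight of the image course
    # easiest-first curriculum ordering under the blended difficulty score
    return sorted(
        samples,
        key=lambda s: w_img * s["image_difficulty"]
        + (1.0 - w_img) * s["text_difficulty"],
    )

samples = [
    {"id": 0, "image_difficulty": 0.9, "text_difficulty": 0.1},
    {"id": 1, "image_difficulty": 0.2, "text_difficulty": 0.8},
]
first = dual_curriculum_order(samples, epoch=0, num_epochs=10)  # image-easy first
last = dual_curriculum_order(samples, epoch=9, num_epochs=10)   # text-easy first
```

At epoch 0 the image-easy sample leads the batch order; by the final epoch the text-easy sample leads, realizing the image-first-to-text transition.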
{"title":"A Course-Focused Dual Curriculum For Image Captioning.","authors":"Mohammad Alsharid, Rasheed El-Bouri, Harshita Sharma, Lior Drukker, Aris T Papageorghiou, J Alison Noble","doi":"10.1109/ISBI48211.2021.9434055","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434055","url":null,"abstract":"<p><p>We propose a curriculum learning captioning method to caption fetal ultrasound images by training a model to dynamically transition between two different modalities (image and text) as training progresses. Specifically, we propose a course-focused dual curriculum method, where a course is training with a curriculum based on only one of the two modalities involved in image captioning. We compare two configurations of the course-focused dual curriculum; an image-first course-focused dual curriculum which prepares the early training batches primarily on the complexity of the image information before slowly introducing an order of batches for training based on the complexity of the text information, and a text-first course-focused dual curriculum which operates in reverse. The evaluation results show that dynamically transitioning between text and images over epochs of training improves results when compared to the scenario where both modalities are considered in equal measure in every epoch.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"716-720"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ISBI48211.2021.9434055","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39327913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/ISBI48211.2021.9433977
Anthony Sicilia, Xingchen Zhao, Davneet S Minhas, Erin E O'Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, Seong Jae Hwang
We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions that require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques to new problems with well-established models, e.g., U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is model-agnostic, requiring no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution on a well-suited problem in medical imaging: the automatic segmentation of white matter hyperintensity (WMH). We consider two neuroimaging modalities (T1-MR and FLAIR) with complementary information appropriate for our problem.
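The core idea — an inner-loop gradient step on the shared parameters under a weighted multi-domain loss, followed by a meta-update of the loss weights — can be illustrated on a toy scalar model. The quadratic stand-in losses, finite-difference gradients, softmax weighting, and "upweight the lagging domain" meta-rule are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Toy per-domain losses of a shared parameter theta (stand-ins for, say,
# T1-MR and FLAIR segmentation losses with different optima).
losses = [lambda th: (th - 1.0) ** 2, lambda th: (th + 1.0) ** 2]

def grad(f, x, eps=1e-5):
    """Central finite-difference derivative."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

theta = 0.5                 # shared model parameter
log_w = np.zeros(2)         # logits of the per-domain loss weights
inner_lr, meta_lr = 0.1, 0.25
for _ in range(500):
    w = np.exp(log_w) / np.exp(log_w).sum()   # softmax keeps weights positive
    # inner-loop gradient step on theta under the weighted multi-domain loss
    theta = theta - inner_lr * sum(wi * grad(f, theta) for wi, f in zip(w, losses))
    # meta-update of the weights: upweight domains whose loss lags after the step
    post = np.array([f(theta) for f in losses])
    log_w = log_w + meta_lr * (post - post.mean())

w = np.exp(log_w) / np.exp(log_w).sum()
```

The weight update is algorithmic only — no extra model parameters or architecture changes — which is the model-agnostic property the abstract emphasizes.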
{"title":"MULTI-DOMAIN LEARNING BY META-LEARNING: TAKING OPTIMAL STEPS IN MULTI-DOMAIN LOSS LANDSCAPES BY INNER-LOOP LEARNING.","authors":"Anthony Sicilia, Xingchen Zhao, Davneet S Minhas, Erin E O'Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, Seong Jae Hwang","doi":"10.1109/ISBI48211.2021.9433977","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433977","url":null,"abstract":"<p><p>We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions which explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques for new problems with well-established models, e.g. U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is <i>model-agnostic</i>, requiring no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution to a fitting problem in medical imaging, specifically, in the automatic segmentation of white matter hyperintensity (WMH). We look at two neuroimaging modalities (T1-MR and FLAIR) with complementary information fitting for our problem.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"650-654"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ISBI48211.2021.9433977","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39588695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433879
Haoxin Zheng, Qi Miao, Steven S Raman, Fabien Scalzo, Kyunghyun Sung
Multi-parametric MRI (mpMRI) is a powerful non-invasive tool for diagnosing prostate cancer (PCa) and is widely recommended before prostate biopsies. The Prostate Imaging Reporting and Data System (PI-RADS) is used to interpret mpMRI. However, when the pre-biopsy mpMRI is negative (PI-RADS 1 or 2), there is no consensus on which patients should undergo prostate biopsies. Recently, radiomics has shown great ability in quantitative imaging analysis, with outstanding performance on computer-aided diagnosis tasks. We propose an integrative radiomics-based approach to predict prostate biopsy results when the pre-biopsy mpMRI is negative. Specifically, the proposed approach combines radiomics features and clinical features with machine learning to stratify positive and negative biopsy groups among patients with negative mpMRI. We retrospectively reviewed all clinical prostate MRIs and identified 330 negative mpMRI scans followed by biopsy results. Our model was trained and validated with 10-fold cross-validation and reached a negative predictive value (NPV) of 0.99, a sensitivity of 0.88, and a specificity of 0.63 in receiver operating characteristic (ROC) analysis. Compared with existing methods, ours achieved 11.2% higher NPV and 87.2% higher sensitivity at the cost of 23.2% lower specificity.
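The three reported metrics come straight from the confusion matrix; NPV is the clinically decisive one here, since it measures how often patients the model would spare a biopsy are truly negative. A minimal helper (an illustration, not the authors' code):

```python
def biopsy_screening_metrics(y_true, y_pred):
    """Confusion-matrix metrics for a positive-vs-negative biopsy classifier.

    Labels: 1 = positive biopsy, 0 = negative biopsy.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # fraction of positive biopsies caught
        "specificity": tn / (tn + fp),   # fraction of negatives correctly cleared
        "npv": tn / (tn + fn),           # reliability of a "no biopsy" call
    }

m = biopsy_screening_metrics([1, 1, 0, 0, 0, 1], [1, 1, 0, 0, 1, 0])
```

The paper's trade-off (higher NPV and sensitivity at the cost of specificity) corresponds to shifting the decision threshold so that fewer true positives fall into the "negative" bucket, at the price of more false positives.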
{"title":"INTEGRATIVE RADIOMICS MODELS TO PREDICT BIOPSY RESULTS FOR NEGATIVE PROSTATE MRI.","authors":"Haoxin Zheng, Qi Miao, Steven S Raman, Fabien Scalzo, Kyunghyun Sung","doi":"10.1109/isbi48211.2021.9433879","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433879","url":null,"abstract":"<p><p>Multi-parametric MRI (mpMRI) is a powerful non-invasive tool for diagnosing prostate cancer (PCa) and is widely recommended to be performed before prostate biopsies. Prostate Imaging Reporting and Data System version (PI-RADS) is used to interpret mpMRI. However, when the pre-biopsy mpMRI is negative, PI-RADS 1 or 2, there exists no consensus on which patients should undergo prostate biopsies. Recently, radiomics has shown great abilities in quantitative imaging analysis with outstanding performance on computer-aid diagnosis tasks. We proposed an integrative radiomics-based approach to predict the prostate biopsy results when pre-biopsy mpMRI is negative. Specifically, the proposed approach combined radiomics features and clinical features with machine learning to stratify positive and negative biopsy groups among negative mpMRI patients. We retrospectively reviewed all clinical prostate MRIs and identified 330 negative mpMRI scans, followed by biopsy results. Our proposed model was trained and validated with 10-fold cross-validation and reached the negative predicted value (NPV) of 0.99, the sensitivity of 0.88, and the specificity of 0.63 in receiver operating characteristic (ROC) analysis. Compared with results from existing methods, ours achieved 11.2% higher NPV and 87.2% higher sensitivity with a cost of 23.2% less specificity.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"877-881"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433879","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39862550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9434056
Aniket Pramanik, Mathews Jacob
The main focus of this work is a novel framework for the joint reconstruction and segmentation of parallel MRI (PMRI) brain data. We introduce an image-domain deep network for calibrationless recovery of undersampled PMRI data. The proposed approach is a deep-learning (DL) based generalization of local low-rank approaches for uncalibrated PMRI recovery, including CLEAR [6]. Since the image-domain approach exploits additional annihilation relations compared to k-space based approaches, we expect it to offer improved performance. To minimize segmentation errors resulting from undersampling artifacts, we combine the proposed scheme with a segmentation network and train it in an end-to-end fashion. In addition to reducing segmentation errors, this approach also offers improved reconstruction performance by reducing overfitting; the reconstructed images exhibit reduced blurring and sharper edges than those of an independently trained reconstruction network.
{"title":"RECONSTRUCTION AND SEGMENTATION OF PARALLEL MR DATA USING IMAGE DOMAIN DEEP-SLR.","authors":"Aniket Pramanik, Mathews Jacob","doi":"10.1109/isbi48211.2021.9434056","DOIUrl":"10.1109/isbi48211.2021.9434056","url":null,"abstract":"<p><p>The main focus of this work is a novel framework for the joint reconstruction and segmentation of parallel MRI (PMRI) brain data. We introduce an image domain deep network for calibrationless recovery of undersampled PMRI data. The proposed approach is the deep-learning (DL) based generalization of local low-rank based approaches for uncalibrated PMRI recovery including CLEAR [6]. Since the image domain approach exploits additional annihilation relations compared to k-space based approaches, we expect it to offer improved performance. To minimize segmentation errors resulting from undersampling artifacts, we combined the proposed scheme with a segmentation network and trained it in an end-to-end fashion. In addition to reducing segmentation errors, this approach also offers improved reconstruction performance by reducing overfitting; the reconstructed images exhibit reduced blurring and sharper edges than independently trained reconstruction network.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8330410/pdf/nihms-1668202.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39289271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433938
Shu-Fu Shih, Sevgi Gokce Kafali, Tess Armstrong, Xiaodong Zhong, Kara L Calkins, Holden H Wu
Deep learning has been applied to remove artifacts from undersampled MRI and to replace time-consuming signal fitting in quantitative MRI, but these have usually been treated as separate tasks, which does not fully exploit the shared information. This work proposes a new two-stage framework that completes these two tasks in a concerted approach and also estimates the pixel-wise uncertainty levels. Results from accelerated free-breathing radial MRI for liver fat quantification demonstrate that the proposed framework can achieve high image quality from undersampled radial data, high accuracy for liver fat quantification, and detect uncertainty caused by noisy input data. The proposed framework achieved 3-fold acceleration to <1 min scan time and reduced the computational time for signal fitting to <100 ms/slice in free-breathing liver fat quantification.
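One standard way to obtain the pixel-wise uncertainty the abstract mentions is to have the network predict a mean and a log-variance per pixel and train with a heteroscedastic Gaussian negative log-likelihood. This recipe, and the toy fat-fraction numbers below, are assumptions for illustration; the paper's exact formulation may differ:

```python
import numpy as np

def gaussian_nll(mu, log_var, target):
    """Per-pixel heteroscedastic Gaussian negative log-likelihood.

    A network head predicting (mu, log_var) per pixel and trained with this
    loss learns to raise log_var where its estimate is unreliable, yielding
    a pixel-wise uncertainty map.
    """
    return 0.5 * (log_var + (target - mu) ** 2 * np.exp(-log_var))

mu = np.array([0.10, 0.60])        # predicted fat-fraction values (toy numbers)
target = np.array([0.10, 0.10])    # reference values; pixel 1 is badly off
low_var = gaussian_nll(mu, np.full(2, -4.0), target)   # confident prediction
high_var = gaussian_nll(mu, np.full(2, 0.0), target)   # uncertain prediction
```

The loss rewards confidence only where the prediction is accurate: for the correct pixel, low variance gives the lower loss, while for the badly wrong pixel, claiming low variance is heavily penalized, which is what lets the model flag noisy input regions.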
{"title":"Deep Learning-Based Parameter Mapping with Uncertainty Estimation for Fat Quantification using Accelerated Free-Breathing Radial MRI.","authors":"Shu-Fu Shih, Sevgi Gokce Kafali, Tess Armstrong, Xiaodong Zhong, Kara L Calkins, Holden H Wu","doi":"10.1109/isbi48211.2021.9433938","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433938","url":null,"abstract":"<p><p>Deep learning has been applied to remove artifacts from undersampled MRI and to replace time-consuming signal fitting in quantitative MRI, but these have usually been treated as separate tasks, which does not fully exploit the shared information. This work proposes a new two-stage framework that completes these two tasks in a concerted approach and also estimates the pixel-wise uncertainty levels. Results from accelerated free-breathing radial MRI for liver fat quantification demonstrate that the proposed framework can achieve high image quality from undersampled radial data, high accuracy for liver fat quantification, and detect uncertainty caused by noisy input data. The proposed framework achieved 3-fold acceleration to <1 min scan time and reduced the computational time for signal fitting to <100 ms/slice in free-breathing liver fat quantification.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"433-437"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433938","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39816504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433926
Zhe Xu, Jiangpeng Yan, Jie Luo, William Wells, Xiu Li, Jayender Jagadeesan
The loss function of an unsupervised multimodal image registration framework has two terms: a similarity metric and a regularization term. In the deep learning era, researchers have proposed many approaches to automatically learn the similarity metric, which has proven effective in improving registration performance. For the regularization term, however, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration to constrain the deformation field of multimodal registration. In experiments on abdominal CT-MR registration, the proposed method yields better results than conventional regularization methods, especially for severely deformed local regions.
{"title":"UNIMODAL CYCLIC REGULARIZATION FOR TRAINING MULTIMODAL IMAGE REGISTRATION NETWORKS.","authors":"Zhe Xu, Jiangpeng Yan, Jie Luo, William Wells, Xiu Li, Jayender Jagadeesan","doi":"10.1109/isbi48211.2021.9433926","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433926","url":null,"abstract":"<p><p>The loss function of an unsupervised multimodal image registration framework has two terms, i.e., a metric for similarity measure and regularization. In the deep learning era, researchers proposed many approaches to automatically learn the similarity metric, which has been shown effective in improving registration performance. However, for the regularization term, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration, to constrain the deformation field of multimodal registration. In the experiment of abdominal CT-MR registration, the proposed method yields better results over conventional regularization methods, especially for severely deformed local regions.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433926","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39291016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433888
Kevinminh Ta, Shawn S Ahn, John C Stendahl, Albert J Sinusas, James S Duncan
Accurate motion estimation and segmentation of the left ventricle from medical images are important tasks for quantitative evaluation of cardiovascular health. Echocardiography offers a cost-efficient and non-invasive modality for examining the heart, but it poses additional challenges for automated analysis due to the low signal-to-noise ratio inherent in ultrasound imaging. In this work, we propose a shape-regularized convolutional neural network for estimating dense displacement fields between sequential 3D B-mode echocardiography images, with the capability of also predicting left ventricular segmentation masks. Manually traced segmentations guide the unsupervised estimation of displacement between a source and a target image while also serving as labels to train the network to predict segmentations. To enforce realistic cardiac motion patterns, a flow incompressibility term is incorporated to penalize divergence. Our proposed network is evaluated on an in vivo canine 3D+t B-mode echocardiographic dataset. We show that the shape regularizer improves the motion estimation performance of the network, and our overall model performs favorably against competing methods.
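The incompressibility term penalizes the divergence of the estimated displacement field, since locally volume-preserving motion has zero divergence. A minimal finite-difference sketch with unit voxel spacing (the paper's discretization and weighting may differ):

```python
import numpy as np

def divergence_penalty(disp):
    """Mean squared divergence of a dense 3D displacement field.

    disp has shape (3, D, H, W): one channel per displacement component.
    Penalizing divergence discourages local expansion/compression, a proxy
    for the near-incompressibility of myocardial tissue.
    """
    div = sum(np.gradient(disp[i], axis=i) for i in range(3))  # du_i/dx_i summed
    return np.mean(div ** 2)

# A uniform translation is divergence-free; a radially expanding field is not.
translation = np.ones((3, 8, 8, 8))
coords = np.stack(np.meshgrid(*[np.arange(8)] * 3, indexing="ij")).astype(float)
expansion = 0.1 * coords   # disp = 0.1 * position -> divergence 0.3 everywhere
```

The penalty vanishes for rigid translation and grows for expanding or compressing motion, which is exactly the behavior wanted from a soft incompressibility constraint in the training loss.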
{"title":"SHAPE-REGULARIZED UNSUPERVISED LEFT VENTRICULAR MOTION NETWORK WITH SEGMENTATION CAPABILITY IN 3D+TIME ECHOCARDIOGRAPHY.","authors":"Kevinminh Ta, Shawn S Ahn, John C Stendahl, Albert J Sinusas, James S Duncan","doi":"10.1109/isbi48211.2021.9433888","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433888","url":null,"abstract":"<p><p>Accurate motion estimation and segmentation of the left ventricle from medical images are important tasks for quantitative evaluation of cardiovascular health. Echocardiography offers a cost-efficient and non-invasive modality for examining the heart, but provides additional challenges for automated analyses due to the low signal-to-noise ratio inherent in ultrasound imaging. In this work, we propose a shape regularized convolutional neural network for estimating dense displacement fields between sequential 3D B-mode echocardiography images with the capability of also predicting left ventricular segmentation masks. Manually traced segmentations are used as a guide to assist in the unsupervised estimation of displacement between a source and a target image while also serving as labels to train the network to additionally predict segmentations. To enforce realistic cardiac motion patterns, a flow incompressibility term is also incorporated to penalize divergence. Our proposed network is evaluated on an <i>in vivo</i> canine 3D+t B-mode echocardiographic dataset. It is shown that the shape regularizer improves the motion estimation performance of the network and our overall model performs favorably against competing methods.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"536-540"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433888","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39103614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | DOI: 10.1109/isbi48211.2021.9433897
Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Multimodal MRI provides complementary and clinically relevant information to probe tissue condition and to characterize various diseases. However, it is often difficult to acquire sufficiently many modalities from the same subject due to limitations in study plans, even when quantitative analysis still requires them. In this work, we propose a unified conditional disentanglement framework to synthesize an arbitrary target modality from an input modality. Our framework hinges on a cycle-constrained conditional adversarial training approach, in which a modality-agnostic encoder extracts a modality-invariant anatomical feature and a conditioned decoder generates the target modality. We validate our framework on four MRI modalities from the BraTS'18 database, including T1-weighted, T1 contrast-enhanced, T2-weighted, and FLAIR MRI, showing superior synthesis quality over the comparison methods. In addition, we report results from experiments on a tumor segmentation task carried out with synthesized data.
{"title":"A UNIFIED CONDITIONAL DISENTANGLEMENT FRAMEWORK FOR MULTIMODAL BRAIN MR IMAGE TRANSLATION.","authors":"Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo","doi":"10.1109/isbi48211.2021.9433897","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433897","url":null,"abstract":"<p><p>Multimodal MRI provides complementary and clinically relevant information to probe tissue condition and to characterize various diseases. However, it is often difficult to acquire sufficiently many modalities from the same subject due to limitations in study plans, while quantitative analysis is still demanded. In this work, we propose a unified conditional disentanglement framework to synthesize any arbitrary modality from an input modality. Our framework hinges on a cycle-constrained conditional adversarial training approach, where it can extract a modality-invariant anatomical feature with a modality-agnostic encoder and generate a target modality with a conditioned decoder. We validate our framework on four MRI modalities, including T1-weighted, T1 contrast enhanced, T2-weighted, and FLAIR MRI, from the BraTS'18 database, showing superior performance on synthesis quality over the comparison methods. In addition, we report results from experiments on a tumor segmentation task carried out with synthesized data.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433897","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39452028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9434114
Pangpang Liu, Fusheng Wang, George Teodoro, Jun Kong
Three-dimensional (3D) digital pathology is emerging for next-generation tissue-based cancer research. To enable such histopathology image volume analysis, serial histopathology slides need to be well aligned. In this paper, we propose a histopathology image registration fine-tuning method with integrated landmark evaluation by texture and spatial proximity measures. Representative anatomical structures and image corner features are first detected as landmark candidates. Next, we identify strongly matched landmarks and modify weakly matched ones by leveraging image texture features and landmark spatial proximity measures. Both qualitative and quantitative results of extensive experiments demonstrate that our proposed method is robust and further enhances the registration accuracy of our previously registered image set by 31.15% (correlation), 4.88% (mutual information), and 41.02% (mean squared error), respectively. These promising results suggest that our method can be used as a fine-tuning module to further boost registration accuracy, a prerequisite for histology spatial and morphology analysis in an information-lossless 3D tissue space for cancer research.
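Scoring a candidate landmark match by combining a texture term with a spatial proximity term can be sketched as below. The normalized cross-correlation texture measure, Gaussian distance falloff, equal weighting, and `sigma` are illustrative assumptions; the paper's actual measures may differ:

```python
import numpy as np

def match_score(patch_a, patch_b, pos_a, pos_b, sigma=50.0):
    """Score a candidate landmark match on a serial-section pair by combining
    patch texture similarity (normalized cross-correlation) with spatial
    proximity (Gaussian falloff on landmark distance)."""
    a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-8)
    b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-8)
    ncc = float((a * b).mean())                           # texture term, in [-1, 1]
    dist = np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float))
    proximity = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # spatial term, in (0, 1]
    return 0.5 * ncc + 0.5 * proximity

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
strong = match_score(patch, patch, (100, 100), (102, 101))        # same texture, nearby
weak = match_score(patch, rng.random((16, 16)), (100, 100), (400, 400))
```

High-scoring pairs would be kept as strong matches, while low-scoring ones would be flagged for modification, mirroring the identify-strong/modify-weak step in the abstract.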
{"title":"HISTOPATHOLOGY IMAGE REGISTRATION BY INTEGRATED TEXTURE AND SPATIAL PROXIMITY BASED LANDMARK SELECTION AND MODIFICATION.","authors":"Pangpang Liu, Fusheng Wang, George Teodoro, Jun Kong","doi":"10.1109/isbi48211.2021.9434114","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9434114","url":null,"abstract":"<p><p>Three-dimensional (3D) digital pathology has been emerging for next-generation tissue based cancer research. To enable such histopathology image volume analysis, serial histopathology slides need to be well aligned. In this paper, we propose a histopathology image registration fine tuning method with integrated landmark evaluations by texture and spatial proximity measures. Representative anatomical structures and image corner features are first detected as landmark candidates. Next, we identify strong and modify weak matched landmarks by leveraging image texture features and landmark spatial proximity measures. Both qualitative and quantitative results of extensive experiments demonstrate that our proposed method is robust and can further enhance registration accuracy of our previously registered image set by 31.15% (correlation), 4.88% (mutual information), and 41.02% (mean squared error), respectively. The promising experimental results suggest that our method can be used as a fine tuning module to further boost registration accuracy, a premise of histology spatial and morphology analysis in an information-lossless 3D tissue space for cancer research.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"1827-1830"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9434114","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39458883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433882
Abdul Haseeb Ahmed, Prashant Nagpal, Stanley Kruger, Mathews Jacob
Bilinear models such as low-rank and compressed sensing models, which decompose dynamic data into spatial and temporal factors, are powerful and memory-efficient tools for the recovery of dynamic MRI data. These methods rely on sparsity and energy-compaction priors on the factors to regularize the recovery. Motivated by the deep image prior, we introduce a novel bilinear model whose factors are regularized using convolutional neural networks. To reduce the run time, we initialize the CNN parameters by pre-training them on pre-acquired data with longer acquisition times. Since fully sampled data are not available, pre-training is performed on undersampled data in an unsupervised fashion. We use sparsity regularization of the network parameters to minimize overfitting of the network to measurement noise. Our experiments on free-breathing and ungated cardiac CINE data acquired using a navigated golden-angle gradient-echo radial sequence show that our method provides reduced spatial blurring compared to low-rank and SToRM reconstructions.
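The bilinear decomposition underlying such models writes the dynamic dataset as a product of a spatial factor and a temporal factor, fitted only on the sampled measurements. In DEBLUR the factors are CNN outputs; the plain masked matrix-factorization fit below is only a toy illustration of the bilinear idea, with sampling ratio, rank, and step size chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nt, r = 30, 20, 2
U_true = rng.standard_normal((nx, r))
V_true = rng.standard_normal((nt, r))
X = U_true @ V_true.T                      # ground-truth dynamic data (space x time)
mask = rng.random((nx, nt)) < 0.4          # keep ~40% of entries ("undersampling")

U = 0.1 * rng.standard_normal((nx, r))     # spatial factor estimate
V = 0.1 * rng.standard_normal((nt, r))     # temporal factor estimate
lr = 0.01
for _ in range(5000):
    R = mask * (U @ V.T - X)               # residual on observed entries only
    U, V = U - lr * R @ V, V - lr * R.T @ U  # gradient steps on both factors
train_resid = np.linalg.norm(mask * (U @ V.T - X)) / np.linalg.norm(mask * X)
```

Because the factorization has far fewer degrees of freedom than the full space-time matrix, fitting only the sampled entries still pins down the whole object — the memory efficiency and implicit regularization that make bilinear models attractive for dynamic MRI.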
{"title":"DYNAMIC IMAGING USING DEEP BILINEAR UNSUPERVISED LEARNING (DEBLUR).","authors":"Abdul Haseeb Ahmed, Prashant Nagpal, Stanley Kruger, Mathews Jacob","doi":"10.1109/isbi48211.2021.9433882","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433882","url":null,"abstract":"<p><p>Bilinear models such as low-rank and compressed sensing, which decompose the dynamic data to spatial and temporal factors, are powerful and memory efficient tools for the recovery of dynamic MRI data. These methods rely on sparsity and energy compaction priors on the factors to regularize the recovery. Motivated by deep image prior, we introduce a novel bilinear model, whose factors are regularized using convolutional neural networks. To reduce the run time, we initialize the CNN parameters by pre-training them on pre-acquired data with longer acquistion time. Since fully sampled data is not available, pretraining is performed on undersampled data in an unsupervised fashion. We use sparsity regularization of the network parameters to minimize the overfitting of the network to measurement noise. Our experiments on on free-breathing and ungated cardiac CINE data acquired using a navigated golden-angle gradient-echo radial sequence show the ability of our method to provide reduced spatial blurring as compared to low-rank and SToRM reconstructions.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"1099-1102"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433882","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39552699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}