JOINT SEGMENTATION OF MULTIPLE SCLEROSIS LESIONS AND BRAIN ANATOMY IN MRI SCANS OF ANY CONTRAST AND RESOLUTION WITH CNNs
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9434127
Benjamin Billot, Stefano Cerri, Koen Van Leemput, Adrian V Dalca, Juan Eugenio Iglesias
We present the first deep learning method to segment Multiple Sclerosis lesions and brain structures from MRI scans of any (possibly multimodal) contrast and resolution. Our method requires only segmentations for training (no images), as it leverages the generative model of Bayesian segmentation to produce synthetic scans with simulated lesions, which are then used to train a CNN. The method can be retrained to segment at any resolution by adjusting the amount of synthesised partial volume. By construction, the synthetic scans are perfectly aligned with their labels, which enables training with noisy labels obtained with automatic methods. The training data are generated on the fly, and aggressive augmentation (including artefacts) is applied for improved generalisation. We demonstrate our method on two public datasets, comparing it with a state-of-the-art Bayesian approach implemented in FreeSurfer and with dataset-specific CNNs trained on real data. The code is available at https://github.com/BBillot/SynthSeg.
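To make the mechanism concrete, here is a minimal sketch of the label-driven generative idea, assuming a simple per-label Gaussian intensity model; the actual generator additionally simulates lesions, bias fields, artefacts, and partial volume at the target resolution, and all names here are illustrative.

```python
# Minimal sketch: sample a synthetic scan from a label map with a random
# per-label Gaussian intensity model (illustrative simplification).
import numpy as np

def synth_image_from_labels(label_map, rng=None):
    """Sample a synthetic scan from a label map with random per-label contrast."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        mask = label_map == lab
        mu = rng.uniform(0.0, 1.0)       # random mean intensity per structure
        sigma = rng.uniform(0.01, 0.1)   # random within-structure variability
        image[mask] = rng.normal(mu, sigma, size=int(mask.sum()))
    return np.clip(image, 0.0, 1.0)

# Every call draws a new random contrast, so a CNN trained on such
# (image, label) pairs is never tied to one acquisition protocol.
demo_labels = np.zeros((32, 32, 32), dtype=np.int32)
demo_labels[8:24, 8:24, 8:24] = 1
scan = synth_image_from_labels(demo_labels)
```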
{"title":"JOINT SEGMENTATION OF MULTIPLE SCLEROSIS LESIONS AND BRAIN ANATOMY IN MRI SCANS OF ANY CONTRAST AND RESOLUTION WITH CNNs.","authors":"Benjamin Billot, Stefano Cerri, Koen Van Leemput, Adrian V Dalca, Juan Eugenio Iglesias","doi":"10.1109/isbi48211.2021.9434127","DOIUrl":"10.1109/isbi48211.2021.9434127","url":null,"abstract":"<p><p>We present the first deep learning method to segment Multiple Sclerosis lesions and brain structures from MRI scans of any (possibly multimodal) contrast and resolution. Our method only requires segmentations to be trained (no images), as it leverages the generative model of Bayesian segmentation to generate synthetic scans with simulated lesions, which are then used to train a CNN. Our method can be retrained to segment at any resolution by adjusting the amount of synthesised partial volume. By construction, the synthetic scans are perfectly aligned with their labels, which enables training with noisy labels obtained with automatic methods. The training data are generated on the fly, and aggressive augmentation (including artefacts) is applied for improved generalisation. We demonstrate our method on two public datasets, comparing it with a state-of-the-art Bayesian approach implemented in FreeSurfer, and dataset specific CNNs trained on real data. The code is available at https://github.com/BBillot/SynthSeg.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"1971-1974"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8340983/pdf/nihms-1727379.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39291017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DEEP GENERATIVE STORM MODEL FOR DYNAMIC IMAGING
Pub Date: 2021-04-01 | Epub Date: 2021-03-25 | DOI: 10.1109/isbi48211.2021.9433839
Qing Zou, Abdul Haseeb Ahmed, Prashant Nagpal, Stanley Kruger, Mathews Jacob
We introduce a novel generative smoothness regularization on manifolds (SToRM) model for the recovery of dynamic image data from highly undersampled measurements. The proposed generative framework represents the image time series as a smooth non-linear function of low-dimensional latent vectors that capture the cardiac and respiratory phases. The non-linear function is represented using a deep convolutional neural network (CNN). Unlike popular CNN approaches that require extensive fully-sampled training data, which is not available in this setting, the parameters of the CNN generator as well as the latent vectors are jointly estimated from the undersampled measurements using stochastic gradient descent. We penalize the norm of the gradient of the generator to encourage the learning of a smooth surface/manifold, while temporal gradients of the latent vectors are penalized to encourage the time series to be smooth. The main benefits of the proposed scheme are (a) a significant reduction in memory demand compared to the analysis-based SToRM model, and (b) the spatial regularization brought in by the CNN model. We also introduce efficient progressive approaches to minimize the computational complexity of the algorithm.
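A hedged PyTorch sketch of the joint estimation described above follows; the generator, the measurement operator `A`, and all weights are toy placeholders rather than the authors' implementation, and the gradient penalty uses a cheap Jacobian proxy.

```python
# Jointly optimize latent vectors and generator weights from undersampled
# data: data term + temporal latent smoothness + generator gradient penalty.
import torch

T, latent_dim, npix = 20, 2, 32 * 32
z = torch.randn(T, latent_dim, requires_grad=True)        # one latent per frame
G = torch.nn.Sequential(                                  # toy generator
    torch.nn.Linear(latent_dim, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, npix))
opt = torch.optim.SGD(list(G.parameters()) + [z], lr=1e-3)

def A(x):                 # placeholder undersampling operator
    return x[..., ::4]    # keep every 4th sample

y = torch.randn(T, npix // 4)  # stand-in undersampled measurements

for _ in range(100):
    opt.zero_grad()
    x = G(z)
    data_term = ((A(x) - y) ** 2).mean()
    temporal = ((z[1:] - z[:-1]) ** 2).mean()             # smooth latent path
    jac = torch.autograd.grad(x.sum(), z, create_graph=True)[0]
    manifold = (jac ** 2).mean()                          # generator smoothness proxy
    (data_term + 0.1 * temporal + 0.01 * manifold).backward()
    opt.step()
```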
{"title":"DEEP GENERATIVE STORM MODEL FOR DYNAMIC IMAGING.","authors":"Qing Zou, Abdul Haseeb Ahmed, Prashant Nagpal, Stanley Kruger, Mathews Jacob","doi":"10.1109/isbi48211.2021.9433839","DOIUrl":"10.1109/isbi48211.2021.9433839","url":null,"abstract":"<p><p>We introduce a novel generative smoothness regularization on manifolds (SToRM) model for the recovery of dynamic image data from highly undersampled measurements. The proposed generative framework represents the image time series as a smooth non-linear function of low-dimensional latent vectors that capture the cardiac and respiratory phases. The non-linear function is represented using a deep convolutional neural network (CNN). Unlike the popular CNN approaches that require extensive fully-sampled training data that is not available in this setting, the parameters of the CNN generator as well as the latent vectors are jointly estimated from the undersampled measurements using stochastic gradient descent. We penalize the norm of the gradient of the generator to encourage the learning of a smooth surface/manifold, while temporal gradients of the latent vectors are penalized to encourage the time series to be smooth. The main benefits of the proposed scheme are (a) the quite significant reduction in memory demand compared to the analysis based SToRM model, and (b) the spatial regularization brought in by the CNN model. We also introduce efficient progressive approaches to minimize the computational complexity of the algorithm.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8320670/pdf/nihms-1668003.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39267150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CALIBRATIONLESS MRI RECONSTRUCTION WITH A PLUG-IN DENOISER
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433815
Shen Zhao, Lee C Potter, Rizwan Ahmad
Magnetic Resonance Imaging (MRI) is a noninvasive imaging technique that provides excellent soft-tissue contrast without using ionizing radiation. MRI's clinical application may be limited by long data acquisition time; therefore, MR image reconstruction from highly under-sampled k-space data has been an active research area. Calibrationless MRI not only enables a higher acceleration rate but also increases flexibility for sampling pattern design. To leverage non-linear machine learning priors, we pair our High-dimensional Fast Convolutional Framework (HICU) [1] with a plug-in denoiser and demonstrate its feasibility using 2D brain data.
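As a rough illustration of how a plug-in denoiser enters a reconstruction loop, the following sketch shows a generic plug-and-play iteration for single-coil Cartesian MRI; `denoise` is a stand-in for a learned denoiser, and nothing here reproduces HICU itself.

```python
# Generic plug-and-play reconstruction: alternate a data-consistency
# gradient step in k-space with a denoising step in image space.
import numpy as np

def pnp_reconstruct(y, mask, denoise, n_iter=50, step=1.0):
    """y: masked k-space data; mask: sampling pattern; denoise: learned prior."""
    x = np.fft.ifft2(y)                        # zero-filled initialization
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x) - y   # k-space data-consistency error
        x = x - step * np.fft.ifft2(residual)
        x = denoise(x)                         # non-linear learned prior
    return x

# Trivial stand-in denoiser; in practice this would be a trained CNN.
smooth = lambda x: 0.5 * x + 0.25 * (np.roll(x, 1, -1) + np.roll(x, -1, -1))
```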
{"title":"CALIBRATIONLESS MRI RECONSTRUCTION WITH A PLUG-IN DENOISER.","authors":"Shen Zhao, Lee C Potter, Rizwan Ahmad","doi":"10.1109/isbi48211.2021.9433815","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433815","url":null,"abstract":"<p><p>Magnetic Resonance Imaging (MRI) is a noninvasive imaging technique that provides excellent soft-tissue contrast without using ionizing radiation. MRI's clinical application may be limited by long data acquisition time; therefore, MR image reconstruction from highly under-sampled k-space data has been an active research area. Calibrationless MRI not only enables a higher acceleration rate but also increases flexibility for sampling pattern design. To leverage non-linear machine learning priors, we pair our High-dimensional Fast Convolutional Framework (HICU) [1] with a plug-in denoiser and demonstrate its feasibility using 2D brain data.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"1846-1849"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433815","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39834658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
STATISTICAL COMPARISONS OF CHROMOSOMAL SHAPE POPULATIONS
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433812
Carlos J Soto, Peiyao A Zhao, Kyle N Klein, David M Gilbert, Anuj Srivastava
This paper develops statistical tools for testing differences in shapes of chromosomes resulting from certain gene knockouts (KO), specifically RIF1 gene KO (RKO) and the cohesin subunit RAD21 gene KO (CKO). It utilizes a two-sample test for comparing shapes of KO chromosomes with wild type (WT) at two levels: (1) Coarse shape analysis, where one compares shapes of full or large parts of chromosomes, and (2) Fine shape analysis, where chromosomes are first segmented into (TAD-based) pieces and then the corresponding pieces are compared across populations. The shape comparisons - coarse and fine - are based on an elastic shape metric for comparing shapes of 3D curves. The experiments show that the KO populations, RKO and CKO, have statistically significant differences from WT at both coarse and fine levels. Furthermore, this framework highlights local regions where these differences are most prominent.
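The two-sample testing logic can be sketched independently of the metric: given a pairwise distance matrix `D` (which, in the paper's setting, would be computed with the elastic shape metric for 3D curves), a permutation test on the energy statistic decides whether the two populations differ. The statistic and permutation scheme below are generic assumptions, not the authors' exact procedure.

```python
# Two-sample permutation test on a precomputed shape-distance matrix D.
import numpy as np

def energy_stat(D, idx_a, idx_b):
    """Energy two-sample statistic from a distance matrix."""
    between = D[np.ix_(idx_a, idx_b)].mean()
    return 2 * between - D[np.ix_(idx_a, idx_a)].mean() - D[np.ix_(idx_b, idx_b)].mean()

def permutation_pvalue(D, n_a, n_perm=1000, rng=None):
    """First n_a rows/cols of D are group A (e.g. KO), the rest group B (WT)."""
    rng = np.random.default_rng() if rng is None else rng
    n = D.shape[0]
    obs = energy_stat(D, np.arange(n_a), np.arange(n_a, n))
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)                 # shuffle group labels
        exceed += energy_stat(D, perm[:n_a], perm[n_a:]) >= obs
    return (exceed + 1) / (n_perm + 1)
```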
{"title":"STATISTICAL COMPARISONS OF CHROMOSOMAL SHAPE POPULATIONS.","authors":"Carlos J Soto, Peiyao A Zhao, Kyle N Klein, David M Gilbert, Anuj Srivastava","doi":"10.1109/isbi48211.2021.9433812","DOIUrl":"10.1109/isbi48211.2021.9433812","url":null,"abstract":"<p><p>This paper develops statistical tools for testing differences in shapes of chromosomes resulting from certain gene knockouts (KO), specifically RIF1 gene KO (RKO) and the cohesin subunit RAD21 gene KO (CKO). It utilizes a <i>two-sample test</i> for comparing shapes of KO chromosomes with wild type (WT) at two levels: (1) <i>Coarse shape analysis</i>, where one compares shapes of full or large parts of chromosomes, and (2) <i>Fine shape analysis</i>, where chromosomes are first segmented into (TAD-based) pieces and then the corresponding pieces are compared across populations. The shape comparisons - coarse and fine - are based on an elastic shape metric for comparing shapes of 3D curves. The experiments show that the KO populations, RKO and CKO, have statistically significant differences from WT at both coarse and fine levels. Furthermore, this framework highlights local regions where these differences are most prominent.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"788-791"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8840943/pdf/nihms-1776570.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39924997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUALIZING MISSING SURFACES IN COLONOSCOPY VIDEOS USING SHARED LATENT SPACE REPRESENTATIONS
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433982
Shawn Mathew, Saad Nadeem, Arie Kaufman
Optical colonoscopy (OC), the most prevalent colon cancer screening tool, has a high miss rate due to a number of factors, including the geometry of the colon (occlusions from haustral folds and sharp bends), endoscopist inexperience or fatigue, and the limited field of view of the endoscope. We present a framework that visualizes the missed regions per-frame during OC and provides a workable clinical solution. Specifically, we make use of 3D reconstructed virtual colonoscopy (VC) data and the insight that VC and OC share the same underlying geometry but differ in the color, texture and specular reflections embedded in the OC. A lossy unpaired image-to-image translation model is introduced with an enforced shared latent space for OC and VC. This shared space captures the geometric information while deferring the creation of color, texture, and specular information to an additional Gaussian noise input. The latter can be utilized to generate one-to-many mappings from VC to OC and from OC to OC. The code, data and trained models will be released via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.
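An illustrative PyTorch sketch of the shared-latent idea: geometry lives in a latent space shared by OC and VC, while color/texture/specular content is supplied by a Gaussian noise input. Module names, layers, and dimensions are hypothetical simplifications, not the authors' architecture.

```python
# Shared geometric latent + appearance noise -> one-to-many VC-to-OC mapping.
import torch
import torch.nn as nn

class SharedLatentI2I(nn.Module):
    def __init__(self, zdim=64, noise_dim=8):
        super().__init__()
        self.enc_vc = nn.Sequential(nn.Conv2d(1, zdim, 4, 2, 1), nn.ReLU())
        self.enc_oc = nn.Sequential(nn.Conv2d(3, zdim, 4, 2, 1), nn.ReLU())
        self.dec_oc = nn.Sequential(
            nn.ConvTranspose2d(zdim + noise_dim, 3, 4, 2, 1), nn.Tanh())
        self.noise_dim = noise_dim

    def vc_to_oc(self, vc):
        z = self.enc_vc(vc)                              # shared geometric code
        n = torch.randn(z.shape[0], self.noise_dim,      # appearance noise
                        z.shape[2], z.shape[3], device=z.device)
        return self.dec_oc(torch.cat([z, n], dim=1))     # one-to-many mapping
```

Sampling different noise tensors for the same VC input yields different plausible OC appearances over the same geometry, which is the one-to-many behavior described above.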
{"title":"VISUALIZING MISSING SURFACES IN COLONOSCOPY VIDEOS USING SHARED LATENT SPACE REPRESENTATIONS.","authors":"Shawn Mathew, Saad Nadeem, Arie Kaufman","doi":"10.1109/isbi48211.2021.9433982","DOIUrl":"10.1109/isbi48211.2021.9433982","url":null,"abstract":"<p><p>Optical colonoscopy (OC), the most prevalent colon cancer screening tool, has a high miss rate due to a number of factors, including the geometry of the colon (haustral fold and sharp bends occlusions), endoscopist inexperience or fatigue, endoscope field of view. We present a framework to visualize the missed regions per-frame during OC, and provides a workable clinical solution. Specifically, we make use of 3D reconstructed virtual colonoscopy (VC) data and the insight that VC and OC share the same underlying geometry but differ in color, texture and specular reflections, embedded in the OC. A lossy unpaired image-to-image translation model is introduced with enforced shared latent space for OC and VC. This shared space captures the geometric information while deferring the color, texture, and specular information creation to additional Gaussian noise input. The latter can be utilized to generate one-to-many mappings from VC to OC and OC to OC. The code, data and trained models will be released via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2021 ","pages":"329-333"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433982","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39513772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Progression from Mild Cognitive Impairment to Alzheimer's Disease using MRI-based Cortical Features and a Two-State Markov Model
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9434143
Eleonora Ficiarà, Valentino Crespi, Shruti Prashant Gadewar, Sophia I Thomopoulos, Joshua Boyd, Paul M Thompson, Neda Jahanshad, Fabrizio Pizzagalli
Magnetic resonance imaging (MRI) has potential for early diagnosis of individuals at risk of developing Alzheimer's disease (AD). Cognitive performance in healthy elderly people and in those with mild cognitive impairment (MCI) has been associated with measures of cortical gyrification [1] and cortical thickness (CT) [2], yet the extent to which sulcal measures can help to predict AD conversion above and beyond CT measures is not known. Here, we analyzed 721 participants with MCI from phases 1 and 2 of the Alzheimer's Disease Neuroimaging Initiative, applying a two-state Markov model to study the conversion from MCI to AD. Our preliminary results suggest that MRI-based cortical features, including sulcal morphometry, may help to predict conversion from MCI to AD.
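One plausible reading of a two-state (MCI, AD) Markov model with MRI covariates is sketched below: the per-visit conversion probability is a logistic function of cortical features, and AD is absorbing. The functional form and parameter values are assumptions for illustration, not the authors' fitted model.

```python
# Two-state Markov chain: MCI -> AD with feature-dependent transition probability.
import numpy as np

def conversion_prob(features, beta, intercept):
    """P(MCI -> AD between consecutive visits), e.g. from sulcal/CT features."""
    return 1.0 / (1.0 + np.exp(-(intercept + features @ beta)))

def prob_converted_by_visit(features, beta, intercept, n_visits):
    """Probability of having reached the absorbing AD state by visit n."""
    p = conversion_prob(features, beta, intercept)
    return 1.0 - (1.0 - p) ** n_visits

# Toy subject with two standardized cortical features:
x = np.array([0.8, -1.2])
print(prob_converted_by_visit(x, beta=np.array([0.5, -0.3]), intercept=-2.0, n_visits=4))
```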
{"title":"Predicting Progression from Mild Cognitive Impairment to Alzheimer's Disease using MRI-based Cortical Features and a Two-State Markov Model.","authors":"Eleonora Ficiarà, Valentino Crespi, Shruti Prashant Gadewar, Sophia I Thomopoulos, Joshua Boyd, Paul M Thompson, Neda Jahanshad, Fabrizio Pizzagalli","doi":"10.1109/isbi48211.2021.9434143","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9434143","url":null,"abstract":"<p><p>Magnetic resonance imaging (MRI) has a potential for early diagnosis of individuals at risk for developing Alzheimer's disease (AD). Cognitive performance in healthy elderly people and in those with mild cognitive impairment (MCI) has been associated with measures of cortical gyrification [1] and thickness (CT) [2], yet the extent to which sulcal measures can help to predict AD conversion above and beyond CT measures is not known. Here, we analyzed 721 participants with MCI from phases 1 and 2 of the Alzheimer's Disease Neuroimaging Initiative, applying a two-state Markov model to study the conversion from MCI to AD condition. Our preliminary results suggest that MRI-based cortical features, including sulcal morphometry, may help to predict conversion from MCI to AD.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1145-1149"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9434143","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40317904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LEARNING TO SYNTHESIZE CORTICAL MORPHOLOGICAL CHANGES USING GRAPH CONDITIONAL VARIATIONAL AUTOENCODER
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433837
Yaqiong Chai, Mengting Liu, Ben A Duffy, Hosung Kim
Changes in brain morphology, such as cortical thinning, are of great value for understanding the trajectory of brain aging and various neurodegenerative diseases. In this work, we employed a generative neural network, a variational autoencoder (VAE) conditioned on age, that generates cortical thickness maps at various ages given an input cortical thickness map. To take the mesh topology into account, we proposed a loss function based on weighted adjacency that integrates the surface topography, defined as edge connections, with the cortical thickness mapped to the vertices. Compared to a traditional conditional VAE that did not use the surface topological information, our method better predicted "future" cortical thickness maps, especially when the age gap became wider. Our model has the potential to predict the distinctive temporospatial pattern of individual cortical morphology in relation to aging and neurodegenerative diseases.
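A minimal sketch of a weighted-adjacency loss term of the kind described: errors in predicted thickness differences along mesh edges are penalized together with the vertex-wise reconstruction error. The exact weighting and combination used in the paper may differ; everything here is illustrative.

```python
# Topology-aware loss: match thickness differences along mesh edges.
import torch

def adjacency_loss(pred, target, edges, weights):
    """pred/target: (V,) vertex thickness; edges: (E, 2) indices; weights: (E,)."""
    d_pred = pred[edges[:, 0]] - pred[edges[:, 1]]       # predicted edge difference
    d_true = target[edges[:, 0]] - target[edges[:, 1]]   # observed edge difference
    return (weights * (d_pred - d_true) ** 2).mean()

def total_loss(pred, target, edges, weights, lam=0.5):
    """Vertex-wise reconstruction plus the topology-aware edge term."""
    return ((pred - target) ** 2).mean() + lam * adjacency_loss(pred, target, edges, weights)
```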
{"title":"LEARNING TO SYNTHESIZE CORTICAL MORPHOLOGICAL CHANGES USING GRAPH CONDITIONAL VARIATIONAL AUTOENCODER.","authors":"Yaqiong Chai, Mengting Liu, Ben A Duffy, Hosung Kim","doi":"10.1109/isbi48211.2021.9433837","DOIUrl":"https://doi.org/10.1109/isbi48211.2021.9433837","url":null,"abstract":"<p><p>Changes in brain morphology, such as cortical thinning are of great value for understanding the trajectory of brain aging and various neurodegenerative diseases. In this work, we employed a generative neural network variational autoencoder (VAE) that is conditional on age and is able to generate cortical thickness maps at various ages given an input cortical thickness map. To take into account the mesh topology in the model, we proposed a loss function based on weighted adjacency to integrate the surface topography defined as edge connections with the cortical thickness mapped as vertices. Compared to traditional conditional VAE that did not use the surface topological information, our method better predicted \"future\" cortical thickness maps, especially when the age gap became wider. Our model has the potential to predict the distinctive temporospatial pattern of individual cortical morphology in relation to aging and neurodegenerative diseases.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1495-1499"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi48211.2021.9433837","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40322297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering Salient Anatomical Landmarks by Predicting Human Gaze
Pub Date: 2020-04-03 | DOI: 10.1109/ISBI45749.2020.9098505
R Droste, P Chatelain, L Drukker, H Sharma, A T Papageorghiou, J A Noble
Anatomical landmarks are a crucial prerequisite for many medical imaging tasks. Usually, the set of landmarks for a given task is predefined by experts. The landmark locations for a given image are then annotated manually or via machine learning methods trained on manual annotations. In this paper, in contrast, we present a method to automatically discover and localize anatomical landmarks in medical images. Specifically, we consider landmarks that attract the visual attention of humans, which we term visually salient landmarks. We illustrate the method for fetal neurosonographic images. First, full-length clinical fetal ultrasound scans are recorded with live sonographer gaze-tracking. Next, a convolutional neural network (CNN) is trained to predict the gaze point distribution (saliency map) of the sonographers on scan video frames. The CNN is then used to predict saliency maps of unseen fetal neurosonographic images, and the landmarks are extracted as the local maxima of these saliency maps. Finally, the landmarks are matched across images by clustering the landmark CNN features. We show that the discovered landmarks can be used within affine image registration, with average landmark alignment errors between 4.1% and 10.9% of the fetal head long axis length.
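The peak-picking step is standard: landmarks are the local maxima of the predicted saliency map. The window size and threshold below are illustrative choices, not the paper's values.

```python
# Extract landmarks as thresholded local maxima of a 2D saliency map.
import numpy as np
from scipy.ndimage import maximum_filter

def extract_landmarks(saliency, size=15, threshold=0.3):
    """Return (row, col) coordinates of local peaks of a 2D saliency map."""
    peaks = saliency == maximum_filter(saliency, size=size)  # local maxima
    peaks &= saliency > threshold * saliency.max()           # drop weak peaks
    return np.argwhere(peaks)
```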
{"title":"Discovering Salient Anatomical Landmarks by Predicting Human Gaze.","authors":"R Droste, P Chatelain, L Drukker, H Sharma, A T Papageorghiou, J A Noble","doi":"10.1109/ISBI45749.2020.9098505","DOIUrl":"https://doi.org/10.1109/ISBI45749.2020.9098505","url":null,"abstract":"<p><p>Anatomical landmarks are a crucial prerequisite for many medical imaging tasks. Usually, the set of landmarks for a given task is predefined by experts. The landmark locations for a given image are then annotated manually or via machine learning methods trained on manual annotations. In this paper, in contrast, we present a method to automatically discover and localize anatomical landmarks in medical images. Specifically, we consider landmarks that attract the visual attention of humans, which we term <i>visually salient landmarks</i>. We illustrate the method for fetal neurosonographic images. First, full-length clinical fetal ultrasound scans are recorded with live sonographer gaze-tracking. Next, a convolutional neural network (CNN) is trained to predict the gaze point distribution (saliency map) of the sonographers on scan video frames. The CNN is then used to predict saliency maps of unseen fetal neurosonographic images, and the landmarks are extracted as the local maxima of these saliency maps. Finally, the landmarks are matched across images by clustering the landmark CNN features. We show that the discovered landmarks can be used within affine image registration, with average landmark alignment errors between 4.1% and 10.9% of the fetal head long axis length.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1711-1714"},"PeriodicalIF":0.0,"publicationDate":"2020-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ISBI45749.2020.9098505","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38006116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Supervised Representation Learning for Ultrasound Video
Pub Date: 2020-04-03 | DOI: 10.1109/ISBI45749.2020.9098666
Jianbo Jiao, Richard Droste, Lior Drukker, Aris T Papageorghiou, J Alison Noble
Recent advances in deep learning have achieved promising performance for medical image analysis, but in most cases ground-truth annotations from human experts are necessary to train the deep model. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. Therefore, there is significant interest in learning representations from unlabelled raw data. In this paper, we propose a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any type of human annotation. We assume that in order to learn such a representation, the model should identify anatomical structures from the unlabelled data. Therefore we force the model to address anatomy-aware tasks with free supervision from the data itself. Specifically, the model is designed to correct the order of a reshuffled video clip and, at the same time, predict the geometric transformation applied to the video clip. Experiments on fetal ultrasound video show that the proposed approach can effectively learn meaningful and strong representations, which transfer well to downstream tasks like standard plane detection and saliency prediction.
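The two pretext tasks can be sketched as a data pipeline that derives free labels from the video itself; the 90-degree-rotation class below is one assumed instance of "geometric transformation", and all names are illustrative.

```python
# Build a self-supervised training sample: shuffled frame order and an
# applied rotation, each serving as a free prediction target.
import torch

def make_pretext_sample(clip, g):
    """clip: (T, C, H, W) video tensor; g: torch.Generator for reproducibility."""
    T = clip.shape[0]
    perm = torch.randperm(T, generator=g)            # target 1: the frame order
    k = int(torch.randint(0, 4, (1,), generator=g))  # target 2: rotation class
    transformed = torch.rot90(clip[perm], k, dims=(2, 3))
    return transformed, perm, k

g = torch.Generator().manual_seed(0)
sample, order_target, rot_target = make_pretext_sample(torch.randn(8, 1, 64, 64), g)
```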
{"title":"Self-Supervised Representation Learning for Ultrasound Video.","authors":"Jianbo Jiao, Richard Droste, Lior Drukker, Aris T Papageorghiou, J Alison Noble","doi":"10.1109/ISBI45749.2020.9098666","DOIUrl":"https://doi.org/10.1109/ISBI45749.2020.9098666","url":null,"abstract":"<p><p>Recent advances in deep learning have achieved promising performance for medical image analysis, while in most cases ground-truth annotations from human experts are necessary to train the deep model. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. Therefore, there is significant interest in learning representations from unlabelled raw data. In this paper, we propose a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any type of human annotation. We assume that in order to learn such a representation, the model should identify anatomical structures from the unlabelled data. Therefore we force the model to address anatomy-aware tasks with free supervision from the data itself. Specifically, the model is designed to correct the order of a reshuffled video clip and at the same time predict the geometric transformation applied to the video clip. Experiments on fetal ultrasound video show that the proposed approach can effectively learn meaningful and strong representations, which transfer well to downstream tasks like standard plane detection and saliency prediction.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1847-1850"},"PeriodicalIF":0.0,"publicationDate":"2020-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ISBI45749.2020.9098666","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38006117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-Based Deep Learning for Reconstruction of Joint k-q Under-sampled High Resolution Diffusion MRI
Pub Date: 2020-04-01 | Epub Date: 2020-05-22 | DOI: 10.1109/isbi45749.2020.9098593
Merry P Mani, Hemant K Aggarwal, Sanjay Ghosh, Mathews Jacob
We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high-resolution imaging. The proposed reconstruction jointly recovers all the diffusion-weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. Using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signals that span the entire microstructure parameter space. A neural network was then trained in an unsupervised manner, using an autoencoder, to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction and show that it provides a strong denoising prior for recovering the q-space signal. We show reconstruction results on a simulated brain dataset that demonstrate the high acceleration capability of the proposed method.
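A hedged sketch of a pre-trained autoencoder acting as the regularizer inside a model-based loop: a data-consistency gradient step is combined with a pull toward the autoencoder output, treating `x - autoencoder(x)` as an approximate gradient of the learned prior. `A`, `AH`, and the step sizes are placeholders, not the paper's operators.

```python
# Model-based reconstruction with an autoencoder prior (illustrative).
import torch

def reconstruct(y, A, AH, autoencoder, n_iter=30, step=0.5, lam=0.1):
    """y: k-q under-sampled data; A/AH: forward operator and its adjoint."""
    x = AH(y)                                  # adjoint (zero-filled) start
    for _ in range(n_iter):
        grad_data = AH(A(x) - y)               # gradient of the data term
        x = x - step * (grad_data + lam * (x - autoencoder(x)))
    return x
```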
{"title":"Model-Based Deep Learning for Reconstruction of Joint k-q Under-sampled High Resolution Diffusion MRI.","authors":"Merry P Mani, Hemant K Aggarwal, Sanjay Ghosh, Mathews Jacob","doi":"10.1109/isbi45749.2020.9098593","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098593","url":null,"abstract":"<p><p>We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high resolution imaging. The proposed reconstruction jointly recovers all the diffusion weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. By using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signal that spans the entire microstructure parameter space. A neural network was trained in an unsupervised manner using an autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction and show that the autoencoder provides a strong denoising prior to recover the q-space signal. We show reconstruction results on a simulated brain dataset that shows high acceleration capabilities of the proposed method.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"913-916"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098593","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25360296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}