Volume R-CNN: Unified Framework for CT Object Detection and Instance Segmentation
Yun Chen, Junxuan Chen, Bo Xiao, Zhengfang Wu, Ying Chi, Xuansong Xie, Xiansheng Hua
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) | Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759390
As a fundamental task in computer vision, object detection methods for 2D images, such as Faster R-CNN and SSD, can be efficiently trained end-to-end. However, current methods for volumetric data such as computed tomography (CT) usually involve two separate steps for region proposal and classification. In this work, we present a unified framework called Volume R-CNN for object detection in volumetric data. Volume R-CNN is an end-to-end method that performs region proposal, classification and instance segmentation in a single model, which dramatically reduces computational overhead and the number of parameters. These tasks are joined by a key component named RoIAlign3D, which extracts RoI features smoothly and works particularly well for small objects in 3D images. To the best of our knowledge, Volume R-CNN is the first common end-to-end framework for both object detection and instance segmentation in CT. Without bells and whistles, our single model achieves remarkable results on LUNA16. Ablation experiments are conducted to analyze the effectiveness of our method.
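RoIAlign3D extends the 2D RoIAlign idea: RoI features are sampled at non-quantized, fractional voxel locations via trilinear interpolation, which avoids the rounding artifacts that hurt small objects. The following is only an illustrative sketch of that sampling step (the function names `trilinear_sample`/`roi_align_3d`, the one-sample-per-bin layout, and edge clamping are simplifications by this editor, not the authors' implementation):

```python
import numpy as np

def trilinear_sample(vol, pts):
    """Trilinear interpolation of a (D, H, W) volume at float points pts (N, 3)."""
    out = np.empty(len(pts))
    for i, (z, y, x) in enumerate(pts):
        z0, y0, x0 = int(z), int(y), int(x)      # floor for nonnegative coords
        dz, dy, dx = z - z0, y - y0, x - x0
        acc = 0.0
        for iz, wz in ((0, 1 - dz), (1, dz)):
            for iy, wy in ((0, 1 - dy), (1, dy)):
                for ix, wx in ((0, 1 - dx), (1, dx)):
                    acc += wz * wy * wx * vol[min(z0 + iz, vol.shape[0] - 1),
                                              min(y0 + iy, vol.shape[1] - 1),
                                              min(x0 + ix, vol.shape[2] - 1)]
        out[i] = acc
    return out

def roi_align_3d(features, roi_start, roi_end, out_size):
    """Pool a 3D RoI to a fixed grid by sampling bin centers, with no rounding."""
    axes = [s + (e - s) / n * (np.arange(n) + 0.5)
            for s, e, n in zip(roi_start, roi_end, out_size)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    return trilinear_sample(features, grid).reshape(out_size)
```

Because the bin centers are kept as fractions of a voxel, a sub-voxel shift of the RoI produces a smooth change of the pooled features, which is the property the abstract attributes to RoIAlign3D.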
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759323
Feature Aggregation in Perceptual Loss for Ultra Low-Dose (ULD) CT Denoising
M. Green, E. Marom, E. Konen, N. Kiryati, Arnaldo Mayer
Lung cancer CT screening programs are continuously reducing patient exposure to radiation at the expense of image quality. State-of-the-art denoising algorithms are instrumental in preserving the diagnostic value of these images. In this work, a novel neural denoising scheme is proposed for ultra low-dose (ULD) chest CT. The proposed method aggregates multi-scale features that provide rich information for the computation of a perceptual loss. The loss is further optimized for chest CT data by using denoising auto-encoders on real CT images to build the feature-extracting network, instead of using an existing network trained on natural images. The proposed method was validated on co-registered pairs of real ULD and normal-dose scans and compared favorably with published state-of-the-art denoising networks, both qualitatively and quantitatively.
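A perceptual loss compares images in a feature space rather than pixel space. A minimal sketch of the aggregation idea is below; the feature maps stand in for the outputs of the paper's auto-encoder-derived extractor, and the per-scale weighting is an assumption by this editor, not the authors' formulation:

```python
import numpy as np

def perceptual_loss(feats_output, feats_reference, weights=None):
    """Aggregate feature-space MSE over several scales.

    feats_output / feats_reference: lists of same-shaped feature maps,
    one per scale, e.g. from a pretrained denoising auto-encoder.
    """
    if weights is None:
        weights = [1.0] * len(feats_output)   # hypothetical uniform weighting
    return sum(w * np.mean((a - b) ** 2)
               for w, a, b in zip(weights, feats_output, feats_reference))
```

Training the feature extractor on real CT (as the paper does) rather than on natural images makes these distances sensitive to the structures that matter diagnostically.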
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759309
Learning to Segment the Lung Volume from CT Scans Based on Semi-Automatic Ground-Truth
Patrick Sousa, A. Galdran, P. Costa, A. Campilho
Lung volume segmentation is a key step in the design of Computer-Aided Diagnosis systems for automated lung pathology analysis. However, isolating the lung from CT volumes can be challenging due to considerable deformations and the potential presence of pathologies. Convolutional Neural Networks (CNNs) are effective tools for modeling the spatial relationships between lung voxels. Unfortunately, they typically require large quantities of annotated data, and manually delineating the lung from volumetric CT scans is a cumbersome process. We propose to train a 3D CNN to solve this task based on semi-automatically generated annotations. For this, we introduce an extension of the well-known V-Net architecture that can handle higher-dimensional input data. Even if the training set labels are noisy and contain errors, our experiments show that it is possible to learn an accurate lung segmentation from them. Numerical comparisons on an external test set containing lung segmentations provided by a medical expert demonstrate that the proposed model generalizes well to new data, reaching an average Dice coefficient of 98.7%. The proposed approach achieves superior performance with respect to the standard V-Net model, particularly on the lung boundary.
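The 98.7% figure above is a Dice coefficient, the standard overlap measure for segmentation. For reference, it can be computed from two binary masks as follows (a generic definition, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```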
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759421
Exploring Microstructure Asymmetries in the Infant Brain Cortex: A Methodological Framework Combining Structural and Diffusion MRI
C. Rolland, J. Lebenberg, F. Leroy, E. Moulton, P. Adibpour, D. Rivière, C. Poupon, L. Hertz-Pannier, J. F. Mangin, G. Dehaene-Lambertz, J. Dubois
The development of the human brain is a complex process that starts during early pregnancy and extends until the end of adolescence. In parallel to morphological changes in brain size and gyrification, several microstructural changes occur in the cortex, such as the development of dendritic arborization, synaptogenesis and pruning, and fiber myelination. Magnetic Resonance Imaging (MRI) can provide indirect markers of these mechanisms through the mapping of quantitative parameters. Here, we used a dedicated methodological framework to perform reliable voxel-wise analyses over the infant cortex. The examination of hemispheric asymmetries in microstructure required careful alignment of morphological asymmetries through registration of native and flipped brains, using a two-step matching strategy based on sulci (the DISCO approach) and the cortical ribbon (the DARTEL approach). We tested the potential of this approach in 1-to-5-month-old infants, with a focus on cortical longitudinal diffusivity from Diffusion Tensor Imaging (DTI). This enabled us to unravel different microstructural evolution patterns of specific sensorimotor and language regions in the left and right hemispheres.
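The native-versus-flipped comparison above reduces, at its core, to mirroring a volume along the left-right axis and contrasting each voxel with its mirrored counterpart. A toy sketch of that step is shown here; the normalized asymmetry index and the function name are this editor's illustration, and the crucial sulcus/ribbon registration (DISCO/DARTEL) that the paper performs between the two volumes is omitted:

```python
import numpy as np

def asymmetry_index(volume, axis=2, eps=1e-9):
    """Voxel-wise (native - mirrored) / (native + mirrored) asymmetry map.

    axis: the right-left axis of the volume. In practice the flipped brain
    must first be registered to the native one; this sketch skips that.
    """
    flipped = np.flip(volume, axis=axis)
    return (volume - flipped) / (volume + flipped + eps)
```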
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759102
Detecting Prostate Cancer Using A CNN-Based System Without Segmentation
Islam Reda, M. Ghazal, A. Shalaby, Mohammed M Elmogy, A. Aboulfotouh, M. El-Ghar, Adel Said Elmaghraby, R. Keynton, A. El-Baz
A computer-aided diagnosis (CAD) system for early detection of prostate cancer from diffusion-weighted magnetic resonance imaging (DWI) is proposed in this paper. The proposed system starts by defining a region of interest (ROI) that includes the prostate across the different slices of the input DWI volume. Then, the apparent diffusion coefficient (ADC) of the defined ROI is calculated, normalized and refined. Finally, classification of the prostate as either benign or malignant is achieved using a two-stage classification system. In the first stage, seven convolutional neural networks (CNNs) are used to determine initial classification probabilities for each case. Then, an SVM with a Gaussian kernel is fed these probabilities to determine the final diagnosis. The proposed system is novel in that it can detect prostate cancer with minimal prior processing (e.g., a rough definition of the prostate region). The developed system is evaluated using DWI datasets collected at seven different b-values from 40 patients (20 benign and 20 malignant). These DWI datasets were acquired using two scanners with different magnetic field strengths (1.5 Tesla and 3 Tesla). The resulting area under the curve (AUC) after the second classification stage is 0.99, showing that our segmentation-free system performs on par with up-to-date systems.
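The ADC mentioned above comes from the standard mono-exponential diffusion model, S_b = S_0 · exp(-b · ADC), so it can be estimated from two diffusion weightings as ADC = ln(S_0 / S_b) / b. A minimal sketch (a textbook two-point estimate, not the paper's normalization/refinement pipeline):

```python
import numpy as np

def adc_map(s0, sb, b, eps=1e-9):
    """Two-point ADC estimate from a b=0 image s0 and a b-weighted image sb.

    Units: ADC is in mm^2/s when b is in s/mm^2. eps guards against
    division/log of zero in background voxels.
    """
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    return np.log((s0 + eps) / (sb + eps)) / b
```

With the seven b-values the paper acquires, the same model would normally be fit by least squares over all weightings rather than from a single pair.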
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759264
Comparison of Different Augmentation Techniques for Improved Generalization Performance for Gleason Grading
I. Arvidsson, N. Overgaard, K. Åström, A. Heyden
It is well known that deep learning algorithms used in digital pathology tend to overfit to the site of the training data. Since an algorithm that does not generalize is not very useful, in this work we study how different data augmentation techniques can reduce this problem, and also how data from different sites can be normalized to one another. For both of these approaches we use cycle generative adversarial networks (GANs), either to generate more examples to train on or to transform images from one site to another. Furthermore, we investigate to what extent standard augmentation techniques improve generalization performance. We performed experiments on four datasets of prostate biopsy slides, stained with H&E and annotated in detail with Gleason grades. We obtained results similar to previous studies, with accuracies of 77% for Gleason grading on images from the same site as the training data and 59% on images from other sites. However, we also found that traditional augmentation techniques gave better performance than cycle GANs, whether the latter were used to augment the training data or to normalize the test data.
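"Standard" augmentation for histopathology tiles commonly means the eight flip/rotation variants of each patch (the dihedral group of the square). As an illustration of that baseline, under the assumption that the paper's traditional augmentations include such geometric transforms (the exact recipe is not specified in the abstract):

```python
import numpy as np

def dihedral_augment(image):
    """Return the 8 flip/rotation variants of a 2D image patch."""
    variants = []
    for k in range(4):
        rotated = np.rot90(image, k)       # rotate by k * 90 degrees
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # plus its mirrored version
    return variants
```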
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759481
Automatic Detection of the Nasal Cavities and Paranasal Sinuses Using Deep Neural Networks
C. O. Laura, Patrick Hofmann, K. Drechsler, S. Wesarg
The nasal cavity and paranasal sinuses present large interpatient variability. Additional conditions, for example concha bullosa or nasal septum deviation, complicate their segmentation. As in other areas of the body, a preceding multi-structure detection step can facilitate the segmentation task. In this paper, an approach is proposed to individually detect all sinuses and the nasal cavity. For a better delimitation of their borders, the use of an irregular polyhedron is proposed. For accurate prediction, the Darknet-19 deep neural network is used, which, combined with the You Only Look Once (YOLO) method, has shown very promising results in other fields of computer vision. 57 CT scans were available, of which 85% were used for training and the remaining 15% for validation.
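The 85%/15% train/validation split mentioned above is the one piece of the experimental setup concrete enough to sketch. A generic patient-level split helper (the function name, seeding, and rounding are this editor's choices, not the paper's):

```python
import random

def split_scans(scan_ids, train_frac=0.85, seed=0):
    """Shuffle scan identifiers reproducibly and split into train/validation."""
    ids = list(scan_ids)
    random.Random(seed).shuffle(ids)   # fixed seed -> reproducible split
    n_train = round(train_frac * len(ids))
    return ids[:n_train], ids[n_train:]
```

For 57 scans this yields 48 training and 9 validation cases, matching the proportions reported.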
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759494
High Contrast T1-Weighted MRI with Fluid and White Matter Suppression Using MP2RAGE
J. Beaumont, H. Saint-Jalmes, O. Acosta, T. Kober, M. Tanner, J. Ferré, O. Salvado, J. Fripp, G. Gambarota
A novel magnetic resonance imaging (MRI) sequence called fluid and white matter suppression (FLAWS) was recently proposed for brain imaging at 3T. This sequence provides two co-registered 3D-MRI datasets of T1-weighted images. The voxel-wise division of these two datasets yields contrast-enhanced images that have been used in preoperative Deep Brain Stimulation (DBS) planning. In the current study, we propose a new way of combining the two 3D-MRI FLAWS datasets to increase the contrast-to-noise ratio of the resulting images. Furthermore, since many centers performing DBS are equipped with 1.5T MRI systems, we also optimized the FLAWS sequence parameters for data acquisition at a field strength of 1.5T.
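Two of the quantities in this abstract are simple enough to make concrete: the voxel-wise division of the two co-registered datasets, and the contrast-to-noise ratio (CNR) used to compare combinations. The sketch below uses generic definitions (the epsilon guard and the particular CNR convention, mean difference over noise standard deviation, are this editor's assumptions; the paper's new combination scheme itself is not reproduced here):

```python
import numpy as np

def flaws_division(inv1, inv2, eps=1e-9):
    """Voxel-wise division of the two co-registered FLAWS datasets."""
    return np.asarray(inv1, dtype=float) / (np.asarray(inv2, dtype=float) + eps)

def contrast_to_noise(region_a, region_b, noise_sd):
    """CNR between two tissue regions, given the background noise level."""
    return abs(np.mean(region_a) - np.mean(region_b)) / noise_sd
```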
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759446
Robust T2 Relaxometry With Hamiltonian MCMC for Myelin Water Fraction Estimation
Thomas Yu, M. Pizzolato, Erick Jorge Canales-Rodríguez, J. Thiran
We present a voxel-wise Bayesian multi-compartment $T_{2}$ relaxometry fitting method based on Hamiltonian Markov Chain Monte Carlo (HMCMC) sampling. The $T_{2}$ spectrum is modeled as a mixture of truncated Gaussian components, whose parameters are estimated in a completely data-driven, voxel-based fashion, i.e. without fixing any parameters or imposing spatial regularization. We estimate each parameter as the expectation of the corresponding marginal distribution drawn from the joint posterior obtained with Hamiltonian sampling. We validate our scheme on synthetic and ex vivo data for which histology is available. We show that the proposed method enables more robust parameter estimation than a state-of-the-art point estimate based on differential evolution. Moreover, the proposed HMCMC-based myelin water fraction calculation reveals high spatial correlation with its histological counterpart.
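The forward model underlying multi-compartment $T_{2}$ relaxometry is a weighted sum of exponential decays, $S(\mathrm{TE}) = \sum_i w_i \exp(-\mathrm{TE}/T_{2,i})$, and the myelin water fraction (MWF) is the fraction of the spectrum's weight lying at short $T_2$. A discrete-compartment sketch of both quantities follows (the 40 ms cutoff is a commonly used convention, and the discrete compartments stand in for the paper's truncated-Gaussian mixture):

```python
import numpy as np

def t2_signal(te, weights, t2_values):
    """Multi-compartment T2 decay: S(TE) = sum_i w_i * exp(-TE / T2_i)."""
    te = np.asarray(te, dtype=float)[:, None]
    return (np.asarray(weights) * np.exp(-te / np.asarray(t2_values))).sum(axis=1)

def myelin_water_fraction(weights, t2_values, cutoff=40.0):
    """MWF: weight of short-T2 pools (below cutoff, in ms) over total weight."""
    w = np.asarray(weights, dtype=float)
    t2 = np.asarray(t2_values, dtype=float)
    return w[t2 < cutoff].sum() / w.sum()
```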
Pub Date: 2019-04-08 | DOI: 10.1109/ISBI.2019.8759183
A Skeleton and Deformation Based Model for Neonatal Pial Surface Reconstruction in Preterm Newborns
Mengting Liu, C. Lepage, Seun Jeon, T. Flynn, Shiyu Yuan, Justin Kim, A. Toga, A. Barkovich, Duan Xu, Alan C. Evans, Hosung Kim
Although quantification of cortical thickness characterizes a main aspect of morphology in developing brains, it is challenging in the analysis of neonatal brain MRI due to inaccurate pial surface extraction. In this study, we propose a pial surface reconstruction method that addresses the relatively large partial volume (PV) within the sulcal basin. The new approach combines a new skeletonization technique with deformation models driven by a new gradient feature. The proposed skeletonization method combines the voxels representing the skeleton of the cerebrospinal fluid partial volume (CSF-PV) with the voxels of the medial plane of the gray matter (GM) volume of the sulcus, where no CSF-PV is estimated due to the squashed sulcal bank and the limited resolution. Subsequently, the outer cortical boundary is identified by first deforming the initial surface to the skeleton, then refining it using the gradient model characterizing the subtle edges that represent the “ground truth” of the GM/CSF boundary. Our landmark-based evaluation showed that the initial boundary identified by the skeletonization was already close to the “ground truth” of the GM/CSF boundary (0.4 mm distant). Furthermore, this was significantly improved by the reconstruction of the final pial surface (< 0.1 mm; p < 0.0001). The mean cortical thickness measured through our pipeline correlated positively with postmenstrual age (PMA) at scan (p < 0.0001). The range of the measurements was biologically reasonable (1.4 mm at 28 weeks of PMA to 2.2 mm at term-equivalent age vs. young adults: 2.5–3.5 mm) and quite close to past reports (2.1 mm at term).
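The thickness-versus-PMA relationship reported above is a correlation analysis. For reference, a Pearson correlation can be computed as below; the sample values in the usage are illustrative numbers in the range the abstract reports, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative only: a perfectly linear thickness-vs-PMA relationship gives r = 1.
pma_weeks = [28.0, 32.0, 36.0, 40.0]
thickness_mm = [1.4, 1.6, 1.8, 2.0]
```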