Pub Date: 2017-01-01 · Epub Date: 2017-09-07 · DOI: 10.1007/978-3-319-67389-9_36
Yang Li, Jingyu Liu, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen
Functional connectivity networks derived from resting-state fMRI data have been found to be effective biomarkers for identifying patients with mild cognitive impairment (MCI) among the healthy elderly. However, the ordinary functional connectivity network is essentially a low-order network built on the assumption that the brain is static during the entire scanning period, ignoring temporal variations among the correlations derived from brain region pairs. To overcome this weakness, we propose a new type of high-order network that more accurately describes the relationship of temporal variations among brain regions. Specifically, instead of the commonly used undirected pairwise Pearson's correlation coefficient, we first estimate the low-order effective connectivity network with a novel sparse regression algorithm. Using a similar approach, we then construct the high-order effective connectivity network from the low-order connectivity to incorporate signal-flow information among the brain regions. Finally, we combine the low-order and high-order effective connectivity networks using two decision trees for MCI classification. Experimental results demonstrate the superiority of the proposed method over the conventional undirected low-order and high-order functional connectivity networks, as well as over the low-order and high-order effective connectivity networks used separately.
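The low-order step can be illustrated with a generic sparse-regression sketch (a stand-in, not the paper's novel algorithm): each region's time series is regressed on all other regions with an L1 penalty, yielding a directed, asymmetric weight matrix. The function name, the penalty `alpha`, and the toy data are all illustrative.

```python
# Sketch: directed (effective) connectivity via per-region sparse regression.
import numpy as np
from sklearn.linear_model import Lasso

def effective_connectivity(ts, alpha=0.05):
    """ts: (T, R) array of T time points for R brain regions."""
    T, R = ts.shape
    W = np.zeros((R, R))                      # W[i, j]: influence of region j on region i
    for i in range(R):
        others = np.delete(np.arange(R), i)   # regress region i on all other regions
        model = Lasso(alpha=alpha, max_iter=5000)
        model.fit(ts[:, others], ts[:, i])
        W[i, others] = model.coef_            # sparse, possibly asymmetric weights
    return W

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))           # toy resting-state signals
W = effective_connectivity(ts)
print(W.shape)                                # (10, 10), zero diagonal
```

Because each row is fitted independently, W[i, j] and W[j, i] can differ, which is what makes the network directed rather than a symmetric correlation matrix.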
Fusion of High-Order and Low-Order Effective Connectivity Networks for MCI Classification. Machine learning in medical imaging. MLMI (Workshop), vol. 2017, pp. 307-315. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5999334/pdf/nihms939425.pdf
Pub Date: 2017-01-01 · Epub Date: 2017-09-07 · DOI: 10.1007/978-3-319-67389-9_18
Pei Dong, Xiaohuan Cao, Jun Zhang, Minjeong Kim, Guorong Wu, Dinggang Shen
Groupwise image registration provides an unbiased registration solution over a population of images, which facilitates subsequent population analysis. However, performing groupwise registration on a large set of images is generally computationally expensive. To alleviate this issue, we propose a fast initialization technique to speed up groupwise registration. Our main idea is to generate a set of simulated brain MRI samples with known deformations to their group center. This is achieved in the training stage in two steps. First, a set of training brain MR images is registered to its group center with an existing groupwise registration method. Then, to augment the samples, we perform PCA on the obtained deformation fields (to the group center) to parameterize them. In doing so, we can generate a large number of deformation fields, and thus their respective simulated samples, by varying the PCA coefficients. In the application stage, given a new set of testing brain MR images, we mix them with the augmented training samples. For each testing image, we then find its closest sample in the augmented training dataset to quickly estimate its deformation field to the group center of the training set. In this way, a tentative group center of the testing image set can be estimated immediately, along with the deformation field of each testing image to this estimated center. With this fast initialization, an existing groupwise registration method can finally be used to quickly refine the groupwise registration results. Experimental results on the ADNI dataset show significantly improved computational efficiency and competitive registration accuracy compared to state-of-the-art groupwise registration methods.
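The PCA augmentation step can be sketched as follows, assuming each deformation field has been flattened to a vector; the sampling scale and toy dimensions are illustrative, not from the paper.

```python
# Sketch: parameterize training deformation fields with PCA and sample new ones.
import numpy as np

def augment_deformations(fields, n_new, scale=1.0, seed=0):
    """fields: (N, D) flattened deformation fields to the group center."""
    rng = np.random.default_rng(seed)
    mean = fields.mean(axis=0)
    # SVD of the centered fields gives the PCA basis (rows of Vt).
    U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
    std = S / np.sqrt(max(len(fields) - 1, 1))      # per-component std dev
    # Draw random PCA coefficients and map them back to deformation space.
    coeffs = rng.standard_normal((n_new, len(std))) * std * scale
    return mean + coeffs @ Vt                       # (n_new, D) simulated fields

rng = np.random.default_rng(1)
train = rng.standard_normal((20, 300))              # 20 toy fields, D = 300
simulated = augment_deformations(train, n_new=100)
print(simulated.shape)                              # (100, 300)
```

Each simulated field comes with a known (synthetic) deformation to the group center, which is exactly what makes the nearest-sample lookup at test time a valid fast initialization.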
Efficient Groupwise Registration for Brain MRI by Fast Initialization. Machine learning in medical imaging. MLMI (Workshop), vol. 10541, pp. 150-158.
Pub Date: 2017-01-01 · Epub Date: 2017-09-07 · DOI: 10.1007/978-3-319-67389-9_31
Dong Nie, Li Wang, Roger Trullo, Jianfu Li, Peng Yuan, James Xia, Dinggang Shen
Computed tomography (CT) is commonly used as a diagnostic and treatment-planning imaging modality in craniomaxillofacial (CMF) surgery to correct patients' bony defects. A major disadvantage of CT is that it exposes patients to harmful ionizing radiation during the exam. Magnetic resonance imaging (MRI) is considered much safer and noninvasive, and is often used to study CMF soft tissues (e.g., the temporomandibular joint and brain). However, it is extremely difficult to accurately segment CMF bony structures from MRI, since both bone and air appear dark in MRI, compounded by a low signal-to-noise ratio and partial volume effects. To this end, we propose a 3D deep-learning-based cascade framework to address these issues. Specifically, a 3D fully convolutional network (FCN) architecture is first adopted to coarsely segment the bony structures. Because the bony structures coarsely segmented by the FCN tend to be thicker than they should be, a convolutional neural network (CNN) is further utilized for fine-grained segmentation. To enhance the discriminative ability of the CNN, we concatenate the probability maps predicted by the FCN with the original MRI and feed them together into the CNN, providing more contextual information for segmentation. Experimental results demonstrate good performance and the clinical feasibility of our proposed 3D deep-learning-based cascade framework.
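The input construction for the second-stage CNN amounts to a channel-wise stack of the MRI and the FCN's per-class probability maps. A minimal sketch, assuming a channels-first layout, a single-channel MRI patch, and a hypothetical three-class FCN output (the class count and patch size are illustrative):

```python
# Sketch: stack FCN probability maps with the MRI as extra input channels.
import numpy as np

rng = np.random.default_rng(0)
mri = rng.random((1, 64, 64, 64))                 # one-channel 3D MRI patch
prob = rng.random((3, 64, 64, 64))                # hypothetical per-class FCN probabilities

# The second-stage CNN sees both the raw intensities and the coarse predictions.
cnn_input = np.concatenate([mri, prob], axis=0)   # channels-first concatenation
print(cnn_input.shape)                            # (4, 64, 64, 64)
```

The concatenation gives the refinement network direct access to the coarse segmentation context at every voxel, rather than forcing it to re-derive that context from intensities alone.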
Segmentation of Craniomaxillofacial Bony Structures from MRI with a 3D Deep-Learning Based Cascade Framework. Machine learning in medical imaging. MLMI (Workshop), vol. 10541, pp. 266-273. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5798482/pdf/nihms915076.pdf
Pub Date: 2017-01-01 · Epub Date: 2017-09-07 · DOI: 10.1007/978-3-319-67389-9_37
Yang Li, Hao Yang, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen
Inferring effective brain connectivity networks is a challenging task owing to perplexing noise effects, the curse of dimensionality, and inter-subject variability. Most existing network inference methods are based on correlation analysis and consider data points individually, revealing limited information about neuronal interactions and ignoring the relations among the derivatives of the data. Hence, we propose a novel ultra group-constrained sparse linear regression model for effective connectivity inference. This model utilizes not only the discrepancy between the observed signals and the model prediction, but also the discrepancy between the associated weak derivatives of the observed and model signals, for a more accurate inference of effective connectivity. Moreover, a group constraint is applied to minimize inter-subject variability. The proposed model was validated on a mild cognitive impairment dataset, achieving superior results.
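The derivative-discrepancy idea can be sketched by augmenting an ordinary sparse regression: stack a finite-difference version of the signals under the originals, so the fit must match both the signals and (a discrete stand-in for) their weak derivatives. The group constraint across subjects is omitted here, and `alpha` and the toy data are illustrative.

```python
# Sketch: sparse regression matching both signals and their finite differences.
import numpy as np
from sklearn.linear_model import Lasso

def fit_with_weak_derivatives(X, y, alpha=0.05):
    n = len(y)
    # First-order finite difference operator as a discrete weak derivative.
    D = (np.eye(n) - np.eye(n, k=1))[:-1]       # (n-1, n)
    X_aug = np.vstack([X, D @ X])               # fit signals AND their derivatives
    y_aug = np.concatenate([y, D @ y])
    return Lasso(alpha=alpha, max_iter=5000).fit(X_aug, y_aug).coef_

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
true_w = np.array([1.0, -0.5, 0, 0, 0, 0, 0, 0])
y = X @ true_w + 0.1 * rng.standard_normal(100)
w = fit_with_weak_derivatives(X, y)
print(w.shape)                                  # (8,), sparse estimate of true_w
```

Stacking the difference rows doubles the constraints on the same coefficient vector, so connections whose predicted dynamics disagree with the observed dynamics are penalized even when the raw signals happen to correlate.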
Novel Effective Connectivity Network Inference for MCI Identification. Machine learning in medical imaging. MLMI (Workshop), vol. 2017, pp. 316-324.
Pub Date: 2017-01-01 · Epub Date: 2017-09-07 · DOI: 10.1007/978-3-319-67389-9_19
Jun Wang, Qian Wang, Shitong Wang, Dinggang Shen
Deriving an early diagnosis of autism spectrum disorder (ASD) from neuroimaging data is challenging. In this work, we propose a novel sparse multi-view task-centralized (Sparse-MVTC) classification method for computer-assisted diagnosis of ASD. In particular, since ASD is known to be age- and sex-related, we partition all subjects into groups by age and sex, each of which is treated as a classification task to learn. Meanwhile, we extract multi-view features from functional magnetic resonance imaging to describe the brain connectivity of each subject. This formulates a multi-view, multi-task sparse learning problem, which is solved by the novel Sparse-MVTC method. Specifically, we treat each task in turn as the central task and the others as auxiliary tasks, and consider the task-task and view-view relations between the central task and each auxiliary task. This task-centralized strategy allows a highly efficient solution. Comprehensive experiments on the ABIDE database demonstrate that our proposed Sparse-MVTC method significantly outperforms existing classification methods in ASD diagnosis.
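The task-construction step, partitioning subjects by age band and sex, can be sketched as below; the age bands are purely illustrative, as the paper does not publish its exact grouping here.

```python
# Sketch: partition subjects into age/sex groups, one classification task each.
def build_tasks(ages, sexes, labels, bands=((0, 10), (10, 15), (15, 100))):
    """Return {(age_band, sex): [(subject_index, label), ...]}."""
    tasks = {}
    for i, (a, s, y) in enumerate(zip(ages, sexes, labels)):
        band = next(b for b in bands if b[0] <= a < b[1])
        tasks.setdefault((band, s), []).append((i, y))
    return tasks

ages = [8, 9, 12, 14, 16, 30]
sexes = ["M", "F", "M", "M", "F", "M"]
labels = [1, 0, 1, 0, 1, 0]            # toy ASD / control labels
tasks = build_tasks(ages, sexes, labels)
print(len(tasks))                      # number of non-empty age/sex tasks
```

Each resulting group then serves, in turn, as the central task in the Sparse-MVTC solver, with the remaining groups acting as auxiliary tasks.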
Sparse Multi-view Task-Centralized Learning for ASD Diagnosis. Machine learning in medical imaging. MLMI (Workshop), vol. 10541, pp. 159-167.
Pub Date: 2016-10-01 · DOI: 10.1007/978-3-319-47157-0_22
Polina Binder, Nematollah K Batmanghelich, Raul San Jose Estepar, Polina Golland
Emphysema is one of the hallmarks of Chronic Obstructive Pulmonary Disease (COPD), a devastating lung disease often caused by smoking. Emphysema appears on computed tomography (CT) scans as a variety of textures that correlate with disease subtypes. It has been shown that the disease subtypes and textures are linked to physiological indicators and prognosis, although neither is well characterized clinically. Most previous computational approaches to modeling emphysema imaging data have focused on supervised classification of lung textures in patches of CT scans. In this work, we describe a generative model that jointly captures the heterogeneity of disease subtypes and of the patient population. We also describe a corresponding inference algorithm that simultaneously discovers disease subtypes and population structure in an unsupervised manner. This approach enables us to create image-based descriptors of emphysema beyond those that can be identified through manual labeling of currently defined phenotypes. By applying the resulting algorithm to a large dataset, we identify groups of patients and disease subtypes that correlate with distinct physiological indicators.
Unsupervised Discovery of Emphysema Subtypes in a Large Clinical Cohort. Machine learning in medical imaging. MLMI (Workshop), pp. 180-187. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5317320/pdf/nihms837319.pdf
Pub Date: 2016-10-01 · DOI: 10.1007/978-3-319-47157-0_1
Chen Zu, Yue Gao, Brent Munsell, Minjeong Kim, Ziwen Peng, Yingying Zhu, Wei Gao, Daoqiang Zhang, Dinggang Shen, Guorong Wu
The functional connectome has gained increased attention in the neuroscience community. In general, most network connectivity models are based on correlations between discrete time-series signals that connect only two different brain regions. However, these bivariate region-to-region models do not involve three or more brain regions that form a subnetwork. Here we propose a learning-based method to explore subnetwork biomarkers that significantly distinguish two clinical cohorts, employing learning on a hypergraph. Specifically, we construct a hypergraph by exhaustively inspecting all possible subnetworks for all subjects, where each hyperedge connects a group of subjects demonstrating highly correlated functional connectivity behavior throughout the underlying subnetwork. The objective of hypergraph learning is to jointly optimize the weights of all hyperedges so that the separation of the two groups by the learned data representation best agrees with the observed clinical labels. We deploy our method to find high-order childhood autism biomarkers from rs-fMRI images. Promising results have been obtained from a comprehensive evaluation of the discriminative power and generality of the method in the diagnosis of autism.
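The hypergraph construction can be sketched with a subject-by-hyperedge incidence matrix. In this toy version, one hyperedge per (subject, subnetwork) pair links a subject with its k most similar subjects under that subnetwork's connectivity score; the similarity rule and k are assumptions of the sketch, not the paper's exact criterion.

```python
# Sketch: build a hypergraph incidence matrix from per-subnetwork scores.
import numpy as np

def incidence_matrix(scores, k=3):
    """scores: (S, E) subnetwork scores for S subjects, E candidate subnetworks.
    Returns H of shape (S, S*E): one hyperedge per (subnetwork, seed subject)."""
    S, E = scores.shape
    H = np.zeros((S, S * E))
    for e in range(E):
        # Pairwise similarity within this subnetwork (absolute score difference).
        d = np.abs(scores[:, e][:, None] - scores[:, e][None, :])
        for s in range(S):
            nbrs = np.argsort(d[s])[:k + 1]      # seed subject plus k neighbours
            H[nbrs, e * S + s] = 1.0             # they share one hyperedge
    return H

rng = np.random.default_rng(2)
H = incidence_matrix(rng.random((10, 4)))        # 10 subjects, 4 toy subnetworks
print(H.shape)                                   # (10, 40)
```

Hypergraph learning then assigns each column (hyperedge) a weight, optimized so the induced data representation separates the two cohorts in agreement with the clinical labels.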
Identifying High Order Brain Connectome Biomarkers via Learning on Hypergraph. Machine learning in medical imaging. MLMI (Workshop), pp. 1-9.
Pub Date: 2016-10-01 · DOI: 10.1007/978-3-319-47157-0_9
Minjeong Kim, Guorong Wu, Islem Rekik, Dinggang Shen
The growing collection of longitudinal images for brain disease diagnosis necessitates advanced longitudinal registration and anatomical labeling methods that respect the temporal consistency between images. However, the characteristics of such longitudinal images, and how they are situated on the image manifold, are often neglected in existing labeling methods. Indeed, most of them independently align atlases to each target time-point image to propagate the pre-defined atlas labels to the subject domain. In this paper, we present a dual-layer groupwise registration method to consistently label anatomical regions of interest in brain images across different time points using a multi-atlas labeling framework. Our framework enhances the labeling of longitudinal images by: (1) using the group mean of the longitudinal images of each subject (i.e., the subject-mean) as a bridge between the atlases and the longitudinal subject scans, so that the atlases are aligned to all time-point images jointly; and (2) using the inter-atlas relationships on their nesting manifold to better register each atlas image to the subject-mean. These steps yield a registration that is more consistent (from the joint alignment of atlases with all time-point images) and more accurate (from the manifold-guided registration between each atlas and the subject-mean image), thereby improving the consistency and accuracy of the subsequent labeling step. We tested our dual-layer groupwise registration method on labeling two challenging longitudinal brain datasets (i.e., healthy infants and Alzheimer's disease subjects). Our experimental results showed that our method achieves higher labeling accuracy while maintaining labeling consistency over time, compared to the traditional registration scheme (without our proposed contributions). Moreover, the proposed framework can flexibly integrate with existing label fusion methods, such as sparse-patch-based methods, to improve the labeling accuracy of longitudinal datasets.
Dual-Layer Groupwise Registration for Consistent Labeling of Longitudinal Brain Images. Machine learning in medical imaging. MLMI (Workshop), pp. 69-76.
Pub Date: 2016-10-01 · DOI: 10.1007/978-3-319-47157-0_8
Jun Zhang, Yaozong Gao, Sang Hyun Park, Xiaopeng Zong, Weili Lin, Dinggang Shen
Quantitative analysis of perivascular spaces (PVSs) is important for revealing the correlations between cerebrovascular lesions and neurodegenerative diseases. In this study, we propose a learning-based segmentation framework to extract PVSs from high-resolution 7T MR images. Specifically, we integrate three types of vascular filter responses into a structured random forest for classifying voxels into PVS and background. In addition, we propose a novel entropy-based sampling strategy to extract informative background samples for training the classification model. Since various vascular features can be extracted by the three vascular filters, even thin and low-contrast structures can be effectively extracted from the noisy background. Moreover, continuous and smooth segmentation results can be obtained by utilizing patch-based structured labels. The segmentation performance is evaluated on 19 subjects with 7T MR images, and the experimental results demonstrate that the joint use of the entropy-based sampling strategy, vascular features, and structured learning improves the segmentation accuracy, with the Dice similarity coefficient reaching 66%.
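The entropy-based sampling idea can be sketched as follows: score each candidate background voxel by the intensity entropy of its local patch and keep the highest-entropy ones as informative negatives. Patch size, bin count, and sample budget are hypothetical parameters of this sketch.

```python
# Sketch: rank background voxels by local patch entropy for training sampling.
import numpy as np

def patch_entropy(img, center, half=2, bins=8):
    """Shannon entropy of the intensity histogram in a (2*half+1)^3 patch."""
    z, y, x = center
    patch = img[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                                  # ignore empty bins
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
img = rng.random((20, 20, 20))                    # toy normalized volume
coords = [(z, y, x) for z in range(2, 18) for y in range(2, 18) for x in range(2, 18)]
ent = np.array([patch_entropy(img, c) for c in coords])
top = np.argsort(ent)[::-1][:100]                 # keep the 100 highest-entropy voxels
samples = [coords[i] for i in top]
print(len(samples))
```

High-entropy patches contain varied structure (edges, vessel-like textures), so they make harder and more useful negative examples than flat homogeneous background.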
"Segmentation of Perivascular Spaces Using Vascular Features and Structured Random Forest from 7T MR Image." Machine learning in medical imaging. MLMI (Workshop), pp. 61-68.
Pub Date: 2016-10-01 | DOI: 10.1007/978-3-319-47157-0_29
Zhengwang Wu, Sang Hyun Park, Yanrong Guo, Yaozong Gao, Dinggang Shen
This paper proposes a novel method that uses regression-guided deformable models for segmentation of brain regions of interest (ROIs). Unlike conventional deformable segmentation, which often deforms the shape model locally and is thus sensitive to initialization, we learn a regressor to explicitly guide the shape deformation, which ultimately improves ROI segmentation performance. The regressor is learned in two steps: (1) a joint classification and regression random forest (CRRF) and (2) an auto-context model. The CRRF predicts each voxel's deformation to the nearest point on the ROI boundary, as well as each voxel's class label (e.g., ROI versus background). The auto-context model further refines all voxels' deformations (i.e., the deformation field) and class labels (i.e., the label maps) by considering neighboring structures. Compared to a conventional random forest regressor, the proposed regressor provides more accurate deformation field estimation and is thus more robust in guiding deformation of the shape model. Validated on segmentation of 14 midbrain ROIs from the IXI dataset, our method outperforms state-of-the-art multi-atlas label fusion and classification methods, while also significantly reducing computation cost.
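A rough sketch of the two-step regressor follows. It is an approximation, not the paper's CRRF: scikit-learn has no single forest with a joint classification + regression objective, so two parallel forests over shared features stand in for the joint model, and the auto-context feedback here is per-voxel rather than drawn from neighboring structures. All names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def crrf_autocontext(X, y, d, n_rounds=2, seed=0):
    """Two-step sketch: (1) jointly predict each voxel's class label y
    (ROI vs. background) and its displacement d to the nearest boundary
    point; (2) auto-context refinement, feeding each round's class
    probabilities and predicted displacements back in as extra features.
    Returns the final per-voxel ROI probability and displacement."""
    context = np.zeros((X.shape[0], 0))
    for _ in range(n_rounds):
        feats = np.hstack([X, context])
        clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(feats, y)
        reg = RandomForestRegressor(n_estimators=50, random_state=seed).fit(feats, d)
        prob = clf.predict_proba(feats)[:, 1]
        disp = reg.predict(feats)
        # Previous-round predictions become context features for the next round.
        context = np.column_stack([prob, disp])
    return prob, disp
```

The returned displacement field is what would then drive the deformable model toward the ROI boundary, with the probability map constraining where deformation is trusted.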
"Regression Guided Deformable Models for Segmentation of Multiple Brain ROIs." Machine learning in medical imaging. MLMI (Workshop), pp. 237-245.