Automatic Hippocampal Subfield Segmentation from 3T Multi-modality Images
Zhengwang Wu, Yaozong Gao, Feng Shi, Valerie Jewells, Dinggang Shen
Pub Date: 2016-10-01. DOI: 10.1007/978-3-319-47157-0_28. Machine learning in medical imaging. MLMI (Workshop), pages 229-236.
Hippocampal subfields play important and divergent roles in both memory formation and the early diagnosis of many neurological diseases, but automatic subfield segmentation remains underexplored because of the subfields' small size and poor image contrast. In this paper, we propose an automatic learning-based hippocampal subfield segmentation framework using multi-modality 3T MR images, including T1 MRI and resting-state fMRI (rs-fMRI). To do this, we first acquire both 3T and 7T T1 MRIs for each training subject, and then linearly register the 7T T1 MRI onto the 3T T1 MRI. Six hippocampal subfields are manually labeled on the aligned 7T T1 MRI, which has the 7T image contrast but sits in the 3T T1 space. Next, corresponding appearance and relationship features from both the 3T T1 MRI and rs-fMRI are extracted to train a structured random forest as a multi-label classifier that performs the segmentation. Finally, the subfield segmentation is iteratively refined using additional context features and updated relationship features. To our knowledge, this is the first work to address the challenging task of automatic hippocampal subfield segmentation using routine 3T T1 MRI and rs-fMRI. A quantitative comparison between our results and the manual ground truth demonstrates the effectiveness of our method. In addition, we find that (a) multi-modality features significantly improve subfield segmentation performance owing to the complementary information among modalities, and (b) automatic segmentation results using 3T multi-modality images are partially comparable to those obtained on 7T T1 MRI.
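The per-voxel classification step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the paper uses a structured random forest, for which a plain scikit-learn RandomForestClassifier is substituted here, and all features and labels are synthetic.

```python
# Sketch (with assumptions): appearance features from the 3T T1 MRI and
# rs-fMRI are concatenated per voxel and fed to a random forest that
# predicts one of six subfield labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels, n_t1_feats, n_fmri_feats = 600, 20, 10

# Synthetic per-voxel features from the two modalities.
t1_feats = rng.normal(size=(n_voxels, n_t1_feats))
fmri_feats = rng.normal(size=(n_voxels, n_fmri_feats))
labels = rng.integers(0, 6, size=n_voxels)        # six subfields

X = np.hstack([t1_feats, fmri_feats])             # multi-modality features
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
pred = clf.predict(X)                             # per-voxel subfield labels
```

In the paper, this classification is additionally refined over several iterations using context features computed from the previous iteration's label map.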
Learning-Based 3T Brain MRI Segmentation with Guidance from 7T MRI Labeling
Renping Yu, Minghui Deng, P. Yap, Zhihui Wei, Li Wang, D. Shen
Pub Date: 2016-01-01. DOI: 10.1007/978-3-319-47157-0_26. Machine learning in medical imaging. MLMI (Workshop), pages 213-220.
Functional Connectivity Network Fusion with Dynamic Thresholding for MCI Diagnosis
Xi Yang, Yan Jin, Xiaobo Chen, Han Zhang, Gang Li, Dinggang Shen
Pub Date: 2016-01-01 (Epub 2016-10-01). DOI: 10.1007/978-3-319-47157-0_30. Machine learning in medical imaging. MLMI (Workshop), pages 246-253.
Resting-state functional MRI (rs-fMRI) has been demonstrated to be a valuable neuroimaging tool for identifying patients with mild cognitive impairment (MCI). Previous studies have shown network breakdown in MCI patients using thresholded rs-fMRI connectivity networks. Recently, machine learning techniques have assisted MCI diagnosis by integrating information from multiple networks constructed with a range of thresholds. However, because searching for optimal thresholds is difficult, the thresholds are often predetermined and applied uniformly to the entire network. Here, we propose an element-wise thresholding strategy to dynamically construct multiple functional networks, i.e., possibly using different thresholds for different elements of the connectivity matrix. These dynamically generated networks are then integrated with a network fusion scheme to capture their common and complementary information. Finally, the features extracted from the fused network are fed into a support vector machine (SVM) for MCI diagnosis. Compared to previous methods, our proposed framework greatly improves MCI classification performance.
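The element-wise thresholding idea can be illustrated with a small sketch. This is not the authors' implementation: the per-element thresholds here are random, and the fusion is a naive average standing in for the paper's network fusion scheme.

```python
# Sketch (assumptions marked): each connectivity-matrix entry may receive
# its own threshold, several thresholded networks are generated, and the
# networks are fused before extracting features for an SVM.
import numpy as np

rng = np.random.default_rng(1)
n_rois = 10
conn = rng.uniform(-1, 1, size=(n_rois, n_rois))
conn = (conn + conn.T) / 2                        # symmetric connectivity

def elementwise_threshold(c, thresholds):
    """Keep entries whose magnitude exceeds a per-element threshold."""
    return np.where(np.abs(c) >= thresholds, c, 0.0)

# Several networks, each from a different element-wise threshold matrix.
networks = [elementwise_threshold(conn, rng.uniform(0.1, 0.6, size=conn.shape))
            for _ in range(5)]

fused = np.mean(networks, axis=0)                 # naive fusion (assumption)
features = fused[np.triu_indices(n_rois, k=1)]    # upper-triangle features
```

The `features` vector would then be fed to an SVM classifier, as in the final step of the abstract.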
Fast Neuroimaging-Based Retrieval for Alzheimer's Disease Analysis
Xiaofeng Zhu, Kim-Han Thung, Jun Zhang, Dinggang Shen
Pub Date: 2016-01-01 (Epub 2016-10-01). DOI: 10.1007/978-3-319-47157-0_38. Machine learning in medical imaging. MLMI (Workshop), volume 10019, pages 313-321.
This paper proposes a framework for fast neuroimaging-based retrieval and AD analysis, built on three key steps: (1) landmark detection, which efficiently extracts landmark-based neuroimaging features without requiring nonlinear registration at the testing stage; (2) landmark selection, which removes redundant/noisy landmarks using a feature selection method that considers the structural information among landmarks; and (3) hashing, which converts the high-dimensional features of subjects into binary codes for efficient approximate nearest neighbor search and AD diagnosis. We conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and demonstrated that our framework achieves higher performance than the comparison methods in terms of both accuracy and speed (at least 100 times faster).
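The hashing step (3) can be sketched with a standard random-projection scheme. Note this is an assumption for illustration: the paper learns its hash functions, whereas random hyperplanes are used here.

```python
# Sketch: high-dimensional subject features become short binary codes,
# and retrieval is a Hamming-distance scan over the code table.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_feats, n_bits = 50, 128, 16

feats = rng.normal(size=(n_subjects, n_feats))
proj = rng.normal(size=(n_feats, n_bits))         # random hash hyperplanes
codes = (feats @ proj > 0).astype(np.uint8)       # binary codes per subject

def hamming_nn(query_code, codes):
    """Index of the nearest stored code by Hamming distance."""
    return int(np.argmin(np.count_nonzero(codes != query_code, axis=1)))

# Querying with a stored subject's own code retrieves a zero-distance match.
idx = hamming_nn(codes[7], codes)
```

Comparing 16-bit codes is far cheaper than comparing 128-dimensional float vectors, which is the source of the speedup the abstract reports.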
Joint Discriminative and Representative Feature Selection for Alzheimer's Disease Diagnosis
Xiaofeng Zhu, Heung-Il Suk, Kim-Han Thung, Yingying Zhu, Guorong Wu, Dinggang Shen
Pub Date: 2016-01-01. DOI: 10.1007/978-3-319-47157-0_10. Machine learning in medical imaging. MLMI (Workshop), pages 77-85.
Neuroimaging data have been widely used to derive possible biomarkers for Alzheimer's Disease (AD) diagnosis. Because only certain brain regions are related to AD progression, many feature selection methods have been proposed to identify informative features (i.e., brain regions) for building an accurate prediction model. These methods mostly focus only on the feature-target relationship, selecting features that are discriminative with respect to the targets (e.g., diagnosis labels). However, since brain regions are anatomically and functionally connected, there may also be useful intrinsic relationships among the features themselves. In this paper, by utilizing both the feature-target and feature-feature relationships, we propose a novel sparse regression model that selects informative features which are both discriminative with respect to the targets and representative of the other features. We argue that representative features (i.e., those that can be used to represent many other features) are important, as they signify strong "connections" with other ROIs and could be related to disease progression. We use our model to select features for both binary and multi-class classification tasks, and experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that the proposed method outperforms the other methods compared in this work.
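The paper's joint objective (discriminative plus representative terms under sparsity) is not reproduced here; as a much simpler stand-in, the sketch below shows only the familiar discriminative half as an l1-regularized regression whose nonzero coefficients mark the selected brain-region features. All data are synthetic.

```python
# Sketch: sparse regression as a feature selector. Nonzero Lasso
# coefficients identify the regions deemed informative for the target.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_subjects, n_regions = 80, 30

X = rng.normal(size=(n_subjects, n_regions))
true_w = np.zeros(n_regions)
true_w[[2, 5, 11]] = [1.5, -2.0, 1.0]             # only 3 informative regions
y = X @ true_w + 0.1 * rng.normal(size=n_subjects)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)            # indices of chosen regions
```

The paper's contribution is to add a feature-feature (representative) term to such an objective, so that a region is also kept when it can reconstruct many other regions' signals.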
Hierarchical Multi-modal Image Registration by Learning Common Feature Representations
Hong-Yu Ge, Guorong Wu, Li Wang, Yaozong Gao, D. Shen
Pub Date: 2015-10-05. DOI: 10.1007/978-3-319-24888-2_25. Machine learning in medical imaging. MLMI (Workshop), pages 203-211.
Identification of Infants at Risk for Autism Using Multi-parameter Hierarchical White Matter Connectomes
Yan Jin, Chong-Yaw Wee, F. Shi, Kim-Han Thung, P. Yap, D. Shen
Pub Date: 2015-10-05. DOI: 10.1007/978-3-319-24888-2_21. Machine learning in medical imaging. MLMI (Workshop), pages 170-177.
Inherent Structure-Guided Multi-view Learning for Alzheimer's Disease and Mild Cognitive Impairment Classification
Mingxia Liu, Daoqiang Zhang, Dinggang Shen
Pub Date: 2015-10-01 (Epub 2015-10-02). DOI: 10.1007/978-3-319-24888-2_36. Machine learning in medical imaging. MLMI (Workshop), volume 9352, pages 296-303.
Multi-atlas based morphometric pattern analysis has recently been proposed for the automatic diagnosis of Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI), where multi-view feature representations of subjects are generated using multiple atlases. However, existing multi-atlas based methods usually assume that each class follows a specific data distribution (i.e., a single cluster), while the underlying distribution of the data is actually unknown a priori. In this paper, we propose an inherent structure-guided multi-view learning (ISML) method for AD/MCI classification. Specifically, we first extract multi-view features for subjects using multiple selected atlases, and then cluster the subjects of each original class into several sub-classes (i.e., clusters) in each atlas space. We then encode each subject with a new label vector that considers both the original class labels and the coding vectors of those sub-classes, followed by a multi-task feature selection model in each atlas space. Finally, we learn multiple SVM classifiers based on the selected features and fuse them with an ensemble classification method. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate that our method achieves better performance than several state-of-the-art methods for AD/MCI classification.
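A rough sketch of two pieces of this pipeline follows; it is illustrative only. The sub-class discovery is shown as a plain KMeans within one class, the multi-task feature selection is omitted, and the ensemble is a simple majority vote over per-view SVMs on synthetic data.

```python
# Sketch (assumptions marked): cluster each class into sub-classes, train
# one SVM per feature "view" (stand-in for an atlas space), fuse by vote.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_per_class, n_feats, n_views = 40, 12, 3

# Synthetic two-class data with three feature views.
X_views = [np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_feats)),
                      rng.normal(1.5, 1.0, (n_per_class, n_feats))])
           for _ in range(n_views)]
y = np.array([0] * n_per_class + [1] * n_per_class)

# Sub-classes (clusters) within class 0, shown for view 0 only.
sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    X_views[0][y == 0])

# One SVM per view, fused by majority vote.
clfs = [SVC().fit(Xv, y) for Xv in X_views]
votes = np.array([c.predict(Xv) for c, Xv in zip(clfs, X_views)])
fused_pred = (votes.mean(axis=0) > 0.5).astype(int)
```

In the paper the sub-class assignments additionally enter the label encoding that the feature selection step is trained against, which is not modeled here.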
Longitudinal Patch-Based Segmentation of Multiple Sclerosis White Matter Lesions
Snehashis Roy, Aaron Carass, Jerry L Prince, Dzung L Pham
Pub Date: 2015-10-01 (Epub 2015-10-02). DOI: 10.1007/978-3-319-24888-2_24. Machine learning in medical imaging. MLMI (Workshop), volume 9352, pages 194-202.
Segmenting T2-hyperintense white matter lesions from longitudinal MR images is essential for understanding the progression of multiple sclerosis. Most lesion segmentation techniques find lesions independently at each time point, even though noise and image contrast vary across the time series. In this paper, we present a patch-based 4D lesion segmentation method that takes advantage of the temporal component of longitudinal data. For each subject with multiple time points, 4D patches are constructed from the T1-w and FLAIR scans of all time points. For every 4D patch from a subject, a few relevant matching 4D patches are found in a reference such that their convex combination reconstructs the subject's 4D patch. The corresponding manual segmentation patches of the reference are then combined in the same manner to generate a 4D lesion membership for the subject patch. We compare our 4D patch-based segmentation with independent 3D voxel-based and patch-based lesion segmentation algorithms. Based on ground truth segmentations from 30 data sets, we show that the mean Dice coefficients between manual and automated segmentations improve with the 4D approach compared to two state-of-the-art 3D segmentation algorithms.
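The convex-combination step can be sketched as follows. This is an illustration, not the authors' implementation: nonnegative least squares with weight normalization stands in for their patch-matching and combination scheme, and the patches are synthetic flattened vectors.

```python
# Sketch: a subject patch is approximated by a nonnegative, sum-to-one
# combination of reference patches; the same weights applied to the
# references' manual label patches yield a lesion membership.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
patch_dim, n_refs = 64, 10                        # flattened 4D patches

ref_patches = rng.uniform(size=(n_refs, patch_dim))
ref_labels = (ref_patches > 0.5).astype(float)    # toy lesion label patches

# A subject patch built as an exact mixture of two references.
subject_patch = 0.6 * ref_patches[1] + 0.4 * ref_patches[3]

# Nonnegative least squares, then normalize to a convex combination.
w, _ = nnls(ref_patches.T, subject_patch)
w = w / w.sum()

membership = w @ ref_labels                       # fused lesion membership
```

Because the subject patch here is an exact mixture, the recovered weights concentrate on references 1 and 3; with real data the combination is only approximate.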
Multi-source Information Gain for Random Forest: An Application to CT Image Prediction from MRI Data
Tri Huynh, Yaozong Gao, Jiayin Kang, Li Wang, Pei Zhang, Dinggang Shen
Pub Date: 2015-10-01 (Epub 2015-10-02). DOI: 10.1007/978-3-319-24888-2_39. Machine learning in medical imaging. MLMI (Workshop), volume 9352, pages 321-329.
Random forest is widely recognized as one of the most powerful learning-based predictors in the literature, with a broad range of applications in medical imaging, and notable efforts have been devoted to enhancing the algorithm in multiple facets. In this paper, we present the concept of multi-source information gain, which departs from the conventional notion inherent to random forest: instead of letting the prediction targets alone govern the training process, we characterize information gain using multiple beneficial sources of information. Specifically, we use location and input image patches as secondary sources of information to guide the splitting process in random forest, and we experiment on the challenging task of predicting CT images from MRI data. The method is thoroughly evaluated on two datasets, human brain and prostate, and its performance is further validated with the integration of the auto-context model. The results show that the multi-source information gain concept effectively guides the training process, with consistent improvement in prediction accuracy.
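The idea of a split criterion driven by more than one source can be sketched numerically. The combination weights and the specific form (a weighted sum of per-source variance reductions) are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: score a candidate split by blending the variance reduction on
# the primary prediction target (CT intensity) with the variance
# reduction on a secondary source (e.g., voxel location).
import numpy as np

def variance_reduction(values, mask):
    """Drop in variance when a node is split by a boolean mask."""
    n = len(values)
    left, right = values[mask], values[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return values.var() - (len(left) / n) * left.var() \
                        - (len(right) / n) * right.var()

def multi_source_gain(sources, mask, weights):
    """Weighted sum of per-source variance reductions for one split."""
    return sum(w * variance_reduction(s, mask)
               for s, w in zip(sources, weights))

rng = np.random.default_rng(6)
x = rng.uniform(size=200)                         # a split feature
target = 2.0 * x + 0.1 * rng.normal(size=200)     # primary: CT intensity
location = x + 0.1 * rng.normal(size=200)         # secondary: location

gain = multi_source_gain([target, location], x > 0.5, weights=[0.7, 0.3])
```

A split that organizes the node well with respect to both sources scores higher than one judged on the target alone, which is the intuition behind the paper's criterion.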