Pub Date: 2015-10-01. Epub Date: 2015-10-02. DOI: 10.1007/978-3-319-24888-2_3
Guangkai Ma, Yaozong Gao, Li Wang, Ligang Wu, Dinggang Shen
Random Forest (RF) has been widely used in learning-based labeling. In RF, each sample is directed from the root to a leaf based on decisions made at the interior nodes, also called splitting nodes. A splitting node assigns a testing sample to either its left or right child based on the learned splitting function. The final prediction is the average of the label probability distributions stored in all reached leaf nodes. For ambiguous testing samples, which often lie near the splitting boundaries, the conventional splitting function, also referred to as the hard split function, tends to make wrong assignments and hence wrong predictions. To overcome this limitation, we propose a novel soft-split random forest (SSRF) framework to improve the reliability of node splitting and, ultimately, the accuracy of classification. Specifically, a soft split function assigns a testing sample to both the left and right child nodes with certain probabilities, which effectively reduces the influence of wrong node assignments on prediction accuracy. As a result, each testing sample can reach multiple leaf nodes, and their respective results are fused into the final prediction according to the weights accumulated along the path from the root node to each leaf node. In addition, considering the importance of context information, we adopt a Haar-feature-based context model to iteratively refine the classification map. We have comprehensively evaluated our method on two public datasets, for labeling the hippocampus in MR images and three organs in Head & Neck CT images, respectively. Compared with the hard-split RF (HSRF), our method achieved a notable improvement in labeling accuracy.
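The abstract's soft-split routing can be illustrated with a toy sketch: each internal node splits a sample's unit weight between both children via a sigmoid of the distance to the threshold, and the leaf distributions are fused by those accumulated weights. The node layout, the sigmoid "softness" parameter `tau`, and the three-node tree below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_route(x, tree, node=0, weight=1.0):
    """Recursively route sample x; split its weight between both children
    in proportion to a sigmoid of the signed distance to the threshold,
    and return the weighted sum of the reached leaf distributions."""
    n = tree[node]
    if n["leaf"] is not None:               # leaf: weighted label distribution
        return weight * np.asarray(n["leaf"], dtype=float)
    margin = x[n["feat"]] - n["thresh"]     # signed distance to the boundary
    p_right = 1.0 / (1.0 + np.exp(-margin / n["tau"]))  # soft assignment
    return (soft_route(x, tree, n["right"], weight * p_right)
            + soft_route(x, tree, n["left"], weight * (1.0 - p_right)))

# Toy tree: one split on feature 0 at threshold 0.5, two leaves.
tree = {
    0: {"leaf": None, "feat": 0, "thresh": 0.5, "tau": 0.1,
        "left": 1, "right": 2},
    1: {"leaf": [0.9, 0.1], "feat": None, "thresh": None, "tau": None,
        "left": None, "right": None},
    2: {"leaf": [0.2, 0.8], "feat": None, "thresh": None, "tau": None,
        "left": None, "right": None},
}

dist = soft_route(np.array([0.5]), tree)   # sample exactly on the boundary
print(dist)   # both leaves contribute equally: [0.55 0.45]
```

A sample sitting exactly on the splitting boundary reaches both leaves with weight 0.5 each, which is precisely the ambiguous case where a hard split would commit to one (possibly wrong) child.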
Title: Soft-Split Random Forest for Anatomy Labeling. In: Machine learning in medical imaging. MLMI (Workshop), vol. 9352, pp. 17-25. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6261352/pdf/nihms963645.pdf
Pub Date: 2015-01-01. DOI: 10.1007/978-3-319-24888-2_31
Xiaofeng Zhu, Heung-Il Suk, Yonghua Zhu, Kim-Han Thung, Guorong Wu, Dinggang Shen
In this paper, we propose a multi-view learning method using Magnetic Resonance Imaging (MRI) data for Alzheimer's Disease (AD) diagnosis. Specifically, we extract both Region-Of-Interest (ROI) features and Histograms of Oriented Gradient (HOG) features from each MRI image, and then propose mapping the HOG features onto the space of ROI features to make them comparable and to impose high intra-class similarity with low inter-class similarity. Finally, both the mapped HOG features and the original ROI features are input to a support vector machine for AD diagnosis. The purpose of mapping HOG features onto the space of ROI features is to provide complementary information so that features from different views can not only be comparable (i.e., homogeneous) but also be interpretable. For example, ROI features are robust to noise but fail to reflect small or subtle changes, while HOG features are diverse but less robust to noise. The proposed multi-view learning method is designed to learn the transformation between the two spaces and to separate the classes under the supervision of class labels. The experimental results on the MRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that the proposed multi-view method helps enhance disease status identification performance, outperforming both baseline methods and state-of-the-art methods.
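One ingredient of the method, mapping one feature view onto the space of another, can be sketched as a linear map fit by least squares. The supervised intra/inter-class similarity terms of the actual method are omitted, and all matrix shapes and data below are illustrative assumptions.

```python
import numpy as np

# Learn a linear map W projecting HOG-view features onto the ROI-view
# feature space by minimizing ||H W - R||_F^2 (toy, noise-free data).
rng = np.random.default_rng(0)
n, d_hog, d_roi = 50, 20, 8
H = rng.normal(size=(n, d_hog))           # HOG features, one row per subject
W_true = rng.normal(size=(d_hog, d_roi))  # hidden "true" cross-view map
R = H @ W_true                            # corresponding ROI features

W, *_ = np.linalg.lstsq(H, R, rcond=None) # fit the mapping
mapped = H @ W                            # HOG features in ROI space

print(np.allclose(mapped, R, atol=1e-8))  # exact on this noise-free example
```

Once mapped into a common space, the two views are dimensionally comparable and can be concatenated as classifier input, which is the homogeneity property the abstract emphasizes.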
Title: Multi-view Classification for Identification of Alzheimer's Disease. In: Machine learning in medical imaging. MLMI (Workshop), vol. 9352, pp. 255-262. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4758364/pdf/
Pub Date: 2014-01-01. DOI: 10.1007/978-3-319-10581-9_25
Murat Bilgel, Aaron Carass, Susan M Resnick, Dean F Wong, Jerry L Prince
Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet work on anatomically accurate PET-to-PET registration is limited. We present a method for the spatial normalization of PET images that improves their anatomical alignment based on a deformation correction model learned from structural image registration. To generate the model, we first create a population-based PET template with a corresponding structural image template. We register each PET image onto the PET template using deformable registration that consists of an affine step followed by a diffeomorphic mapping. Constraining the affine step to be the same as that obtained from the PET registration, we find the diffeomorphic mapping that will align the structural image with the structural template. We train partial least squares (PLS) regression models within small neighborhoods to relate the PET intensities and deformation fields obtained from the diffeomorphic mapping to the structural image deformation fields. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation based evaluation on 79 subjects shows that our method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations.
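The core of the correction model is a neighborhood-wise regression from PET-derived features to structural deformation fields. The sketch below substitutes ordinary least squares for the paper's partial least squares (PLS) for brevity; the feature choice, neighborhood size, and synthetic data are illustrative assumptions.

```python
import numpy as np

# Within one small training neighborhood, predict the structural-image
# deformation from the PET intensity and the PET-derived deformation.
rng = np.random.default_rng(1)
n_vox = 200                     # voxels in the neighborhood
X = np.column_stack([
    rng.normal(size=n_vox),     # PET intensity at each voxel
    rng.normal(size=n_vox),     # PET-to-template deformation component
    np.ones(n_vox),             # intercept
])
beta_true = np.array([0.3, 0.9, 0.1])
y = X @ beta_true               # "ground-truth" structural deformation

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # train the local model
y_hat = X @ beta                               # corrected deformation

print(np.allclose(beta, beta_true))   # recovers the toy coefficients
```

At test time such local models let a PET-only registration be corrected toward the anatomically accurate structural registration without needing the structural image itself, which is the point of the paper.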
Title: Deformation field correction for spatial normalization of PET images using a population-derived partial least squares model. In: Machine learning in medical imaging. MLMI (Workshop), vol. 8679, pp. 198-206. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4222176/pdf/nihms637009.pdf
Pub Date: 2014-01-01. DOI: 10.1007/978-3-319-10581-9_9
Zhen Yang, Shenghua Zhong, Aaron Carass, Sarah H Ying, Jerry L Prince
Cerebellar ataxia is a progressive neurodegenerative disease that has multiple genetic versions, each with a characteristic pattern of anatomical degeneration that yields distinctive motor and cognitive problems. Studying this pattern of degeneration can help with the diagnosis of disease subtypes, evaluation of disease stage, and treatment planning. In this work, we propose a learning framework using MR image data for discriminating a set of cerebellar ataxia types and predicting a disease-related functional score. We address the difficulty of analyzing high-dimensional image data with limited training subjects by: 1) training weak classifiers/regressors on a set of image subdomains separately, and combining the weak classifier/regressor outputs to make the decision; 2) perturbing the image subdomains to increase the number of training samples; 3) using a deep learning technique called the stacked auto-encoder to develop highly representative feature vectors of the input data. Experiments show that our approach can reliably classify among four categories (healthy control and three types of ataxia) and predict the functional staging score for ataxia.
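Step 1 of the list, fusing the outputs of subdomain-wise weak classifiers, can be sketched with simple probability averaging. The fixed probability tables below are stand-ins for trained weak classifiers, and averaging is only one possible combination rule; the feature-learning stages are not shown.

```python
import numpy as np

weak_outputs = np.array([           # rows: weak classifiers, one per subdomain
    [0.6, 0.2, 0.1, 0.1],           # columns: 4 classes
    [0.3, 0.4, 0.2, 0.1],           # (healthy control + 3 ataxia types)
    [0.5, 0.1, 0.3, 0.1],
])
fused = weak_outputs.mean(axis=0)   # combine the weak decisions
pred = int(np.argmax(fused))
print(pred)                          # class 0 wins with mean probability ~0.47
```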
Title: Deep Learning for Cerebellar Ataxia Classification and Functional Score Regression. In: Machine learning in medical imaging. MLMI (Workshop), vol. 8679, pp. 68-76.
Pub Date: 2014-01-01. DOI: 10.1007/978-3-319-10581-9_38
Yonggang Shi, Junning Li, Arthur W Toga
In this paper we propose a novel algorithm for efficiently searching for the most similar brains in a large collection of MR imaging data. The key idea is to compactly represent and quantify the differences of cortical surfaces in terms of their intrinsic geometry by comparing the Reeb graphs constructed from their Laplace-Beltrami eigenfunctions. To overcome the topological noise in the Reeb graphs, we develop a progressive pruning and matching algorithm based on the persistence of critical points. Given the Reeb graphs of two cortical surfaces, our method can calculate their distance in less than 10 milliseconds on a PC. In our experiments, we apply the method to a large collection of 1326 brains for searching, clustering, and automated labeling to demonstrate its value for "Big Data" science in human neuroimaging.
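The persistence-based pruning of critical points can be illustrated in a drastically simplified 1-D analogue: compute 0-dimensional sublevel-set persistence of a sampled function with union-find, then discard features whose persistence falls below a noise threshold. The sample function and threshold are illustrative; the actual method operates on Reeb graphs of cortical surfaces.

```python
def persistence_pairs(f):
    """Return (birth, death) pairs of local minima of the sequence f,
    computed by sweeping values upward and merging components."""
    n = len(f)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    comp_min = {}          # component root -> index of its minimum
    alive = [False] * n
    pairs = []
    for i in sorted(range(n), key=lambda k: f[k]):  # sweep values upward
        alive[i] = True
        comp_min[i] = i
        for j in (i - 1, i + 1):                    # merge live neighbors
            if 0 <= j < n and alive[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if f[comp_min[ri]] < f[comp_min[rj]]:
                    ri, rj = rj, ri                 # ri holds the higher min
                pairs.append((f[comp_min[ri]], f[i]))  # younger min dies here
                parent[ri] = rj
    return pairs                                    # the global min never dies

f = [3, 1, 4, 0, 2]
pairs = persistence_pairs(f)
significant = [(b, d) for b, d in pairs if d - b > 0.5]  # prune low persistence
print(significant)     # [(1, 4)]: only one minimum is topologically robust
```

Low-persistence critical points correspond to the topological noise the paper prunes before matching the graphs.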
Title: Persistent Reeb Graph Matching for Fast Brain Search. In: Machine learning in medical imaging. MLMI (Workshop), vol. 8679, pp. 306-313.
Pub Date: 2014-01-01. DOI: 10.1007/978-3-319-10581-9_12
Yaozong Gao, Li Wang, Yeqin Shao, Dinggang Shen
Segmenting the prostate from CT images is a critical step in radiotherapy planning for prostate cancer. The segmentation accuracy can largely affect the efficacy of radiation treatment. However, because the prostate touches the bladder and the rectum, its boundary is often ambiguous and hard to recognize, which leads to inconsistent manual delineations across clinicians. In this paper, we propose a learning-based approach for boundary detection and deformable segmentation of the prostate. Our proposed method aims to learn a boundary distance transform, which maps an intensity image into a boundary distance map. To enforce spatial consistency on the learned distance transform, we combine our approach with the auto-context model to iteratively refine the estimated distance map. After the refinement, the prostate boundaries can be readily detected by finding the valley in the distance map. In addition, the estimated distance map can also be used as a new external force for guiding the deformable segmentation. Specifically, to automatically segment the prostate, we integrate the estimated boundary distance map into a level set formulation. Experimental results on 73 CT planning images show that the proposed distance transform is more effective than the traditional classification-based method for driving the deformable segmentation. Also, our method achieves more consistent segmentations than human raters, and more accurate results than the existing methods under comparison.
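The valley-finding step can be illustrated on a 1-D profile: given a learned boundary-distance map, the boundary sits where the distance attains its (local) minimum. The ideal toy "distance map" below stands in for the regression output; the actual method operates on 3-D maps refined by auto-context.

```python
import numpy as np

x = np.arange(100)                  # positions along a 1-D profile
true_boundary = 42
dist_map = np.abs(x - true_boundary).astype(float)  # ideal distance map

detected = int(np.argmin(dist_map)) # valley of the distance map = boundary
print(detected)                     # 42
```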
Title: Learning Distance Transform for Boundary Detection and Deformable Segmentation in CT Prostate Images. In: Machine learning in medical imaging. MLMI (Workshop), vol. 8679, pp. 93-100. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6097539/pdf/nihms942711.pdf
Pub Date: 2014-01-01. DOI: 10.1007/978-3-319-10581-9_31
Snehashis Roy, Aaron Carass, Jerry L Prince, Dzung L Pham
Quantitative measurements from segmentations of soft tissues in magnetic resonance images (MRI) of human brains provide important biomarkers for normal aging, as well as disease progression. In this paper, we propose a patch-based tissue classification method for MR images using sparse dictionary learning from an atlas. Unlike most atlas-based classification methods, deformable registration from the atlas to the subject is not required. An "atlas" consists of an MR image, its tissue probabilities, and the hard segmentation. The "subject" consists of the MR image and the corresponding affine-registered atlas probabilities (or priors). A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches. The same sparse combination is applied to the segmentation patches of the atlas to generate tissue memberships of the subject. The novel combination of prior probabilities in the example patches enables us to distinguish tissues that have similar intensities but different spatial locations. We show that our method outperforms two state-of-the-art whole-brain tissue segmentation methods. We experimented on 12 subjects having manual tissue delineations, obtaining mean Dice coefficients of 0.91 and 0.87 for cortical gray matter and cerebral white matter, respectively. In addition, experiments on subjects with ventriculomegaly show significantly better segmentation with our approach than with the competing methods.
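The coefficient-transfer principle can be sketched directly: express a subject intensity patch as a combination of atlas intensity patches, then apply the same coefficients to the matching atlas segmentation patches. Plain least squares stands in for the paper's sparse coding, and all dictionary sizes and data below are toy values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms, patch_len = 10, 27             # e.g. flattened 3x3x3 patches
D_int = rng.normal(size=(patch_len, n_atoms))  # atlas intensity patches
D_seg = rng.random(size=(patch_len, n_atoms))  # matching label patches

alpha_true = np.zeros(n_atoms)
alpha_true[[2, 7]] = [0.6, 0.4]         # subject mixes two atlas patches
subject_patch = D_int @ alpha_true

alpha, *_ = np.linalg.lstsq(D_int, subject_patch, rcond=None)
membership = D_seg @ alpha              # transfer coefficients to labels

print(np.allclose(membership, D_seg @ alpha_true))
```

Because the intensity and segmentation dictionaries are paired atom-for-atom, coefficients estimated in intensity space carry over to label space, yielding soft tissue memberships without deformable registration.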
Title: Subject Specific Sparse Dictionary Learning for Atlas based Brain MRI Segmentation. In: Machine learning in medical imaging. MLMI (Workshop), vol. 8679, pp. 248-255.
Pub Date: 2011-09-18. DOI: 10.1007/978-3-642-24319-6_28
M. Chung, Seongho Seo, N. Adluru, H. K. Vorperian
Title: Hot Spots Conjecture and Its Application to Modeling Tubular Structures. In: Machine learning in medical imaging. MLMI (Workshop), pp. 225-232.
Pub Date: 2011-01-01. DOI: 10.1007/978-3-642-24319-6_1
Yonghong Shi, Shu Liao, Dinggang Shen
This paper presents a novel fast registration method for aligning the planning image onto each treatment image of a patient for adaptive radiation therapy of prostate cancer. Specifically, an online correspondence interpolation method is presented to learn the statistical correlation of the deformations between prostate boundary and non-boundary regions from a population of training patients, as well as from the online-collected treatment images of the same patient. With this learned statistical correlation, the estimated boundary deformations can be used to rapidly predict regional deformations between prostates in the planning and treatment images. In particular, the population-based correlation can be used initially to interpolate the dense correspondences when the number of available treatment images from the current patient is small. As more treatment images are acquired from the current patient, the patient-specific information gradually plays a more important role in reflecting the prostate shape changes of the current patient during treatment. Eventually, once a sufficient number of treatment images have been acquired and segmented from the current patient, only the patient-specific correlation is used to guide the regional correspondence prediction. Experimental results show that the proposed method achieves much faster registration than the thin-plate-spline (TPS) based interpolation approach, with comparable registration accuracy.
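The correlation idea can be sketched as a linear map, learned from training deformations, from boundary displacements to interior displacements; a new case's interior motion is then predicted from its boundary alone. The dimensions and synthetic training data are illustrative assumptions, and least squares stands in for whatever correlation model the authors fit.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_bnd, n_int = 40, 12, 30
A_true = rng.normal(size=(n_bnd, n_int))   # hidden boundary->interior relation
B = rng.normal(size=(n_train, n_bnd))      # boundary deformations (training)
I = B @ A_true                             # interior deformations (training)

A, *_ = np.linalg.lstsq(B, I, rcond=None)  # learned boundary->interior map

b_new = rng.normal(size=n_bnd)             # boundary deformation of a new scan
i_pred = b_new @ A                         # predicted interior deformation
print(np.allclose(i_pred, b_new @ A_true))
```

Because predicting interior motion reduces to a matrix-vector product, this is far cheaper at treatment time than solving a dense TPS interpolation per scan, which matches the speedup the abstract reports.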
Title: Learning Statistical Correlation of Prostate Deformations for Fast Registration. In: Machine learning in medical imaging. MLMI (Workshop), vol. 7009, pp. 1-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4179108/pdf/nihms329083.pdf
Pub Date: 2010-01-01. DOI: 10.1007/978-3-642-15948-0_8
Marc Niethammer, David Borland, J S Marron, John Woosley, Nancy E Thomas
This paper presents a method for automatic color and intensity normalization of digitized histology slides stained with two different agents. In comparison to previous approaches, prior information on the stain vectors is used in the estimation process, resulting in improved stability of the estimates. Due to the prevalence of hematoxylin and eosin staining for histology slides, the proposed method has significant practical utility. In particular, it can be used as a first step to standardize appearance across slides, and it is very effective at countering the effects of differing stain amounts and protocols, as well as slide fading. The approach is validated on synthetic experiments and 13 real datasets.
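The Beer-Lambert step that underlies this kind of stain normalization can be sketched as follows: convert RGB to optical density and unmix it against known stain vectors by least squares. The two stain vectors below are common illustrative values for hematoxylin and eosin, not the estimates the paper produces.

```python
import numpy as np

rgb = np.array([150.0, 80.0, 120.0])          # one pixel, 8-bit RGB
od = -np.log((rgb + 1.0) / 256.0)             # optical density per channel

stains = np.array([[0.65, 0.70, 0.29],        # hematoxylin (unit vector)
                   [0.07, 0.99, 0.11]]).T     # eosin (unit vector)
conc, *_ = np.linalg.lstsq(stains, od, rcond=None)  # stain concentrations

reconstructed = stains @ conc                 # OD explained by the 2 stains
print(conc.shape, reconstructed.shape)        # (2,) (3,)
```

Normalization then amounts to rescaling the per-stain concentrations to reference statistics and mapping back to RGB, which is where the paper's prior on the stain vectors stabilizes the estimation.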
Title: Appearance Normalization of Histology Slides. In: Machine learning in medical imaging. MLMI (Workshop), vol. 6357, pp. 58-66. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4211434/pdf/nihms337285.pdf