Pub Date: 2013-09-01  DOI: 10.1109/ICIP.2013.6738838
Ting Liu, Mojtaba Seyedhosseini, Mark Ellisman, Tolga Tasdizen
Automated electron microscopy (EM) image analysis techniques can be tremendously helpful for connectomics research. In this paper, we extend our previous work [1] and propose a fully automatic method to utilize inter-section information for intra-section neuron segmentation of EM image stacks. A watershed merge forest is built via the watershed transform with each tree representing the region merging hierarchy of one 2D section in the stack. A section classifier is learned to identify the most likely region correspondence between adjacent sections. The inter-section information from such correspondence is incorporated to update the potentials of tree nodes. We resolve the merge forest using these potentials together with consistency constraints to acquire the final segmentation of the whole stack. We demonstrate that our method leads to notable segmentation accuracy improvement by experimenting with two types of EM image data sets.
"WATERSHED MERGE FOREST CLASSIFICATION FOR ELECTRON MICROSCOPY IMAGE STACK SEGMENTATION." Proceedings. IEEE International Conference on Computer Vision, 2013, pp. 4069-4073.
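The per-section merging hierarchy the abstract describes can be sketched in a few lines. The region ids, the boundary-saliency dictionary, and the Kruskal-style weakest-boundary-first order below are illustrative assumptions, not the authors' implementation:

```python
import heapq

def build_merge_tree(regions, boundaries):
    """regions: iterable of int region ids (e.g. watershed superpixels);
    boundaries: {(a, b): saliency} for adjacent region pairs.
    Returns merges as (new_id, child_a, child_b, saliency) tuples,
    weakest boundary first."""
    parent = {r: r for r in regions}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path halving
            r = parent[r]
        return r

    heap = [(s, a, b) for (a, b), s in boundaries.items()]
    heapq.heapify(heap)
    next_id = max(regions) + 1
    merges = []
    while heap:
        s, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue  # already joined through a weaker boundary
        parent[next_id] = next_id
        parent[ra] = parent[rb] = next_id
        merges.append((next_id, ra, rb, s))
        next_id += 1
    return merges
```

Each tuple records a new internal tree node and its two children; one such tree per 2D section would make up the merge forest.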
Xiaoyu Ding, Wen-Sheng Chu, Fernando De la Torre, Jeffery F Cohn, Qiao Wang
Automatic facial Action Unit (AU) detection from video is a long-standing problem in facial expression analysis. AU detection is typically posed as a classification problem between frames or segments of positive and negative examples, where existing work emphasizes the use of different features or classifiers. In this paper, we propose a method called Cascade of Tasks (CoT) that combines different tasks (i.e., frame, segment and transition) for AU event detection. We train CoT in a sequential manner embracing diversity, which ensures robustness and generalization to unseen data. In addition to conventional frame-based metrics that evaluate frames independently, we propose a new event-based metric to evaluate detection performance at the event level. We show how the CoT method consistently outperforms state-of-the-art approaches in both frame-based and event-based metrics, across three public datasets that differ in complexity: CK+, FERA and RU-FACS.
"Facial Action Unit Event Detection by Cascade of Tasks." Proceedings. IEEE International Conference on Computer Vision, 2013, pp. 2400-2407. DOI: 10.1109/ICCV.2013.298
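The abstract contrasts frame-based with event-based evaluation. As a hedged sketch of what an event-level score can look like (the paper's exact metric definition is not reproduced here), one can match predicted and ground-truth (start, end) frame intervals by interval IoU and compute an F1:

```python
def event_f1(true_events, pred_events, min_overlap=0.5):
    """Events are (start, end) frame intervals. An event counts as matched
    if some event on the other side overlaps it with IoU >= min_overlap.
    The 0.5 threshold is an illustrative choice."""
    def iou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union else 0.0

    if not true_events or not pred_events:
        return 0.0
    recall = sum(any(iou(t, p) >= min_overlap for p in pred_events)
                 for t in true_events) / len(true_events)
    precision = sum(any(iou(p, t) >= min_overlap for t in true_events)
                    for p in pred_events) / len(pred_events)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike per-frame accuracy, this scores whole AU events, so a detector that fragments one long event into many short ones is penalized.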
Hang Chang, Yin Zhou, Paul Spellman, Bahram Parvin
Image-based classification of tissue histology, in terms of distinct histopathology (e.g., tumor or necrosis regions), provides a series of indices for tumor composition. Furthermore, aggregation of these indices from each whole slide image (WSI) in a large cohort can provide predictive models of clinical outcome. However, the performance of the existing techniques is hindered by large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state) that are always present in a large cohort. We suggest that, compared with the human-engineered features widely adopted in existing systems, unsupervised feature learning is more tolerant to batch effect (e.g., technical variations associated with sample preparation), and pertinent features can be learned without user intervention. This leads to a novel approach for classification of tissue histology based on unsupervised feature learning and spatial pyramid matching (SPM), which utilizes sparse tissue morphometric signatures at various locations and scales. This approach has been evaluated on two distinct datasets consisting of different tumor types collected from The Cancer Genome Atlas (TCGA), and the experimental results indicate that the proposed approach is (i) extensible to different tumor types; (ii) robust in the presence of wide technical variations and biological heterogeneities; and (iii) scalable with varying training sample sizes.
"Stacked Predictive Sparse Coding for Classification of Distinct Regions of Tumor Histopathology." Proceedings. IEEE International Conference on Computer Vision, 2013, pp. 169-176. DOI: 10.1109/ICCV.2013.28
Pub Date: 2011-11-01  DOI: 10.1109/ICCV.2011.6126468
Vikram Appia, Anthony Yezzi
We present an active geodesic contour model in which we constrain the evolving active contour to be a geodesic with respect to a weighted edge-based energy through its entire evolution rather than just at its final state (as in the traditional geodesic active contour models). Since the contour is always a geodesic throughout the evolution, we automatically get local optimality with respect to an edge fitting criterion. This enables us to construct a purely region-based energy minimization model without having to devise arbitrary weights in the combination of our energy function to balance edge-based terms with the region-based terms. We show that this novel approach of combining edge information as the geodesic constraint in optimizing a purely region-based energy yields a new class of active contours which exhibit both local and global behaviors that are naturally responsive to intuitive types of user interaction. We also show the relationship of this new class of globally constrained active contours with traditional minimal path methods, which seek global minimizers of purely edge-based energies without incorporating region-based criteria. Finally, we present some numerical examples to illustrate the benefits of this approach over traditional active contour models.
"Active Geodesics: Region-based Active Contour Segmentation with a Global Edge-based Constraint." Proceedings. IEEE International Conference on Computer Vision, 2011, pp. 1975-1980.
Pub Date: 2011-01-01  DOI: 10.1109/ICCV.2011.6126288
Hua Wang, Feiping Nie, Heng Huang, Shannon Risacher, Chris Ding, Andrew J Saykin, Li Shen
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions, which makes regression analysis a suitable model to study whether neuroimaging measures can help predict memory performance and track the progression of AD. Existing memory performance prediction methods via regression, however, do not take into account either the interconnected structures within imaging data or those among memory scores, which inevitably restricts their predictive capabilities. To bridge this gap, we propose a novel Sparse Multi-tAsk Regression and feaTure selection (SMART) method to jointly analyze all the imaging and clinical data under a single regression framework and with shared underlying sparse representations. Two convex regularizations are combined and used in the model to enable sparsity as well as facilitate multi-task learning. The effectiveness of the proposed method is demonstrated by both clearly improved prediction performances in all empirical test cases and a compact set of selected RAVLT-relevant MRI predictors that accord with prior studies.
"Sparse Multi-Task Regression and Feature Selection to Identify Brain Imaging Predictors for Memory Performance." Proceedings. IEEE International Conference on Computer Vision, 2011, pp. 557-562.
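A common way to realize the "shared underlying sparse representations" in multi-task regression is an l2,1 row-sparsity penalty, which zeroes entire imaging features across all memory-score tasks jointly. The combined objective below is a hypothetical sketch; the paper's exact pair of convex regularizers may differ:

```python
import numpy as np

def l21_norm(W):
    """Sum of l2 norms of the rows of W (features x tasks): drives whole
    rows to zero, i.e. discards a feature for every task at once."""
    return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

def smart_objective(X, Y, W, lam1, lam2):
    """Hypothetical joint objective: squared loss over all tasks plus an
    l2,1 term (joint feature selection) and an l1 term (element-wise
    sparsity). Illustrative only."""
    loss = 0.5 * np.sum((X @ W - Y) ** 2)
    return loss + lam1 * l21_norm(W) + lam2 * float(np.sum(np.abs(W)))
```

Because the l2,1 term couples the tasks, minimizing it selects one compact feature set shared by all memory scores, matching the abstract's motivation.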
Pub Date: 2011-01-01  DOI: 10.1109/ICCV.2011.6126319
Paulo F U Gotardo, Aleix M Martinez
Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions.
"Kernel Non-Rigid Structure from Motion." Proceedings. IEEE International Conference on Computer Vision, 2011, pp. 802-809.
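The kernel trick the abstract refers to can be illustrated with a Gaussian kernel mapping low-dimensional shape-space points non-linearly to shape coefficients. Kernel ridge regression here is a stand-in sketch under assumed parameters, not the authors' NRSFM estimator:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ridge_fit(Z, C, gamma=1.0, lam=1e-3):
    """Fit a non-linear map from low-dimensional shape-space points Z
    (n x d) to shape coefficients C (n x k) via kernel ridge regression;
    returns a predictor for new shape-space points."""
    K = rbf_kernel(Z, Z, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Z)), C)
    return lambda Znew: rbf_kernel(Znew, Z, gamma) @ alpha
```

Swapping the kernel (polynomial, RBF, ...) changes the non-linear deformation model without changing the fitting machinery, which is the flexibility the abstract highlights.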
Pub Date: 2009-09-29  DOI: 10.1109/iccv.2009.5459320
Filiz Bunyak, Kannappan Palaniappan
Graph partitioning active contours (GPAC) is a recently introduced approach that elegantly embeds the graph-based image segmentation problem within a continuous optimization framework. GPAC can be used within parametric snake-based or implicit level set-based continuous active contour paradigms for image partitioning. However, GPAC, like many other graph-based approaches, has quadratic memory requirements, which severely limits the scalability of the algorithm to practical problem domains. An N×N image requires O(N^4) computation and memory to create and store the full graph of pixel inter-relationships even before the start of the contour optimization process; for example, a 1024×1024 grayscale image needs over one terabyte of memory. Approximations using tile/block-based or superpixel-based multiscale grouping of the pixels reduce this complexity by trading off accuracy. This paper describes a new algorithm that implements the exact GPAC algorithm using a constant memory requirement of a few kilobytes, independent of image size.
"Efficient Segmentation Using Feature-based Graph Partitioning Active Contours." Proceedings. IEEE International Conference on Computer Vision, 2009, pp. 873-880.
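The constant-memory claim rests on a decomposition idea: for separable dissimilarities, cross-region pairwise sums collapse into a few running region statistics, so the O(N^4) pixel graph never has to be materialized. The squared-difference dissimilarity below is an illustrative assumption, not necessarily the paper's choice:

```python
def pairwise_sq_sum(inside, outside):
    """Naive cross-region sum of squared intensity differences:
    O(n*m) time, conceptually requiring the full pairwise graph."""
    return sum((a - b) ** 2 for a in inside for b in outside)

def pairwise_sq_sum_fast(inside, outside):
    """The same quantity from per-region accumulators only: O(n+m) time
    and O(1) extra memory, since
    sum_{a,b} (a-b)^2 = m*sum a^2 + n*sum b^2 - 2*(sum a)*(sum b)."""
    n, m = len(inside), len(outside)
    s1, s2 = sum(inside), sum(outside)
    q1 = sum(a * a for a in inside)
    q2 = sum(b * b for b in outside)
    return m * q1 + n * q2 - 2 * s1 * s2
```

Updating the accumulators as pixels cross the contour keeps the energy evaluation incremental, which is how a few kilobytes can suffice regardless of image size.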
Pub Date: 2007-10-14  DOI: 10.1109/ICCV.2007.4408918
Ozlem Subakan, Bing Jian, Baba C Vemuri, C Eduardo Vallejos
Many computer vision and image processing tasks require the preservation of local discontinuities, terminations and bifurcations. Denoising with feature preservation is a challenging task, and in this paper we present a novel technique for preserving complex oriented structures such as junctions and corners present in images. This is achieved in a two-stage process: (1) all image data are pre-processed to extract local orientation information using a steerable Gabor filter bank. The orientation distribution at each lattice point is then represented by a continuous mixture of Gaussians. The continuous mixture representation can be cast as the Laplace transform of the mixing density over the space of positive definite (covariance) matrices. This mixing density is assumed to be a parameterized distribution, namely, a mixture of Wisharts whose Laplace transform is evaluated in a closed-form expression called the Rigaut-type function, a scalar-valued function of the parameters of the Wishart distribution. Computation of the weights in the mixture of Wisharts is formulated as a sparse deconvolution problem. (2) The feature-preserving denoising is then achieved via iterative convolution of the given image data with the Rigaut-type function. We present experimental results on noisy data, real 2D images and 3D MRI data acquired from plant roots depicting bifurcating roots. Superior performance of our technique is demonstrated via comparison to the state-of-the-art anisotropic diffusion filter.
"Feature Preserving Image Smoothing Using a Continuous Mixture of Tensors." Proceedings. IEEE International Conference on Computer Vision, vol. 11, 2007.
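The steerable Gabor filter bank in stage (1) can be sketched as a Gaussian envelope modulating an oriented plane wave, with the bank obtained by sampling the orientation. The size, bandwidth, and frequency values below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, freq=0.25):
    """Real part of an oriented Gabor filter: an isotropic Gaussian
    envelope times a cosine wave along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# A small orientation bank: 8 filters evenly spaced over [0, pi).
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
```

Convolving an image with each filter in the bank yields per-pixel orientation responses, from which the local orientation distribution at each lattice point can be estimated.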
Pub Date: 2007-10-01  DOI: 10.1109/ICCV.2007.4409157
B T Thomas Yeo, Mert Sabuncu, Hartmut Mohlberg, Katrin Amunts, Karl Zilles, Polina Golland, Bruce Fischl
We argue that registration should be thought of as a means to an end, and not as a goal by itself. In particular, we consider the problem of predicting the locations of hidden labels of a test image using observable features, given a training set with both the hidden labels and observable features. For example, the hidden labels could be segmentation labels or activation regions in fMRI, while the observable features could be sulcal geometry or MR intensity. We analyze a probabilistic framework for computing an optimal atlas, and the subsequent registration of a new subject using only the observable features to optimize the hidden label alignment to the training set. We compare two approaches for co-registering training images for the atlas construction: the traditional approach of only using observable features and a novel approach of only using hidden labels. We argue that the alternative approach is superior particularly when the relationship between the hidden labels and observable features is complex and unknown. As an application, we consider the task of registering cortical folds to optimize Brodmann area localization. We show that the alignment of the Brodmann areas improves by up to 25% when using the alternative atlas compared with the traditional atlas. To the best of our knowledge, these are the most accurate Brodmann area localization results (achieved via cortical fold registration) reported to date.
"What Data to Co-register for Computing Atlases." Proceedings. IEEE International Conference on Computer Vision, 2007.
Pub Date: 2007-10-01  DOI: 10.1109/ICCV.2007.4409137
Peng Yu, Boon Thye Thomas Yeo, P Ellen Grant, Bruce Fischl, Polina Golland
We introduce the use of over-complete spherical wavelets for shape analysis of 2D closed surfaces. Bi-orthogonal spherical wavelets have been shown to be powerful tools in the segmentation and shape analysis of 2D closed surfaces, but unfortunately they suffer from aliasing problems and are therefore not invariant under rotations of the underlying surface parameterization. In this paper, we demonstrate the theoretical advantage of over-complete wavelets over bi-orthogonal wavelets and illustrate their utility on both synthetic and real data. In particular, we show that over-complete spherical wavelets allow us to build more stable cortical folding development models, and detect a wider array of regions of folding development in a newborn dataset.
"Cortical Folding Development Study based on Over-Complete Spherical Wavelets." Proceedings. IEEE International Conference on Computer Vision, 2007.