
Latest Publications: Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition

From label fusion to correspondence fusion: a new approach to unbiased groupwise registration.
Paul A Yushkevich, Hongzhi Wang, John Pluta, Brian B Avants

Label fusion strategies are used in multi-atlas image segmentation approaches to compute a consensus segmentation of an image, given a set of candidate segmentations produced by registering the image to a set of atlases [19, 11, 8]. Effective label fusion strategies, such as local similarity-weighted voting [1, 13], substantially reduce segmentation errors compared to single-atlas segmentation. This paper extends the label fusion idea to the problem of finding correspondences across a set of images. Instead of computing a consensus segmentation, weighted voting is used to estimate a consensus coordinate map between a target image and a reference space. Two variants of the problem are considered: (1) where correspondences between a set of atlases are known and are propagated to the target image; (2) where correspondences are estimated across a set of images without prior knowledge. Evaluation on synthetic data shows that correspondences recovered by fusion methods are more accurate than those based on registration to a population template. In a 2D example on real MRI data, fusion methods result in more consistent mappings between manual segmentations of the hippocampus.
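The weighted-voting step generalizes naturally from labels to coordinates: at every voxel, the candidate coordinate maps vote with local similarity weights. A minimal sketch of that fusion step, assuming dense per-pixel weights on a uniform grid (array shapes and the simple normalized averaging are illustrative, not the paper's implementation):

```python
import numpy as np

def fuse_coordinate_maps(maps, weights):
    """Fuse candidate coordinate maps into a consensus map by locally
    weighted voting (a sketch of the correspondence-fusion idea; the
    paper derives the weights from registration similarity).

    maps:    (n_atlases, H, W, 2) candidate (x, y) coordinate maps
    weights: (n_atlases, H, W)    local similarity weights
    """
    # Normalize weights per pixel so the votes sum to one.
    w = weights / weights.sum(axis=0, keepdims=True)
    # Weighted average of the candidate coordinates at every pixel.
    return (maps * w[..., None]).sum(axis=0)
```

With equal weights this reduces to a plain average of the candidate maps; unequal weights pull the consensus toward the more similar atlases.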

DOI: 10.1109/CVPR.2012.6247771 · pp. 956-963 · Published 2012-01-01
Citations: 15
Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions.
Maxwell D Collins, Jia Xu, Leo Grady, Vikas Singh

We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high-entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages.
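At its core, Random Walker segmentation reduces to solving a linear system on the graph Laplacian, which is exactly the sparse-linear-algebra structure the authors map to the GPU. A toy 1-D sketch of that step, using a dense solve for clarity (the paper's CUDA library operates on sparse matrices over full images):

```python
import numpy as np

def random_walker_1d(weights, seeds):
    """Random Walker label probabilities on a 1-D chain of pixels.

    A toy sketch of the core step: solve the Dirichlet problem
    L_U x_U = -B x_S for the unseeded nodes (dense here; the paper
    solves the sparse equivalent on the GPU).

    weights: n-1 edge weights between neighbouring pixels
    seeds:   dict {node index: label in {0, 1}}
    Returns the probability of label 1 at every node.
    """
    n = len(weights) + 1
    L = np.zeros((n, n))                     # combinatorial Laplacian
    for i, w in enumerate(weights):
        L[i, i] += w; L[i + 1, i + 1] += w
        L[i, i + 1] -= w; L[i + 1, i] -= w
    seeded = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    b = np.array([float(seeds[s]) for s in seeded])
    x = np.zeros(n)
    # Harmonic extension of the seed labels to the unseeded nodes.
    x[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, seeded)] @ b)
    for s in seeded:
        x[s] = float(seeds[s])
    return x
```

With uniform weights the unseeded probabilities interpolate linearly between the seeds; a stronger edge weight toward one seed pulls the probability toward that seed's label.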

DOI: 10.1109/CVPR.2012.6247859 · pp. 1656-1663 · Published 2012-01-01 · PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4178955/pdf/nihms425305.pdf
Citations: 0
Robust and Efficient Regularized Boosting Using Total Bregman Divergence.
Meizhu Liu, Baba C Vemuri

Boosting is a well-known machine learning technique used to improve the performance of weak learners and has been successfully applied to computer vision, medical image analysis, computational biology and other fields. A critical step in boosting algorithms involves updating the data sample distribution; however, most existing boosting algorithms use updating mechanisms that lead to overfitting and instabilities during evolution of the distribution, which in turn results in classification inaccuracies. Regularized boosting has been proposed in the literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD, a recently proposed divergence, is statistically robust, and we prove that tBRLPBoost requires only a constant number of iterations to learn a strong classifier and hence is computationally more efficient than other regularized boosting algorithms. Also, unlike other boosting methods that are effective on only a handful of datasets, tBRLPBoost works well on a variety of datasets. We present results of testing our algorithm on many public-domain databases and comparisons to several other state-of-the-art methods. Numerical results show that the proposed algorithm has much improved performance in efficiency and accuracy over other methods.
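For intuition, the total Bregman divergence rescales the ordinary Bregman divergence by the slope of the tangent at the second argument, measuring the orthogonal rather than the vertical distance to that tangent, which is what gives it its robustness. A scalar sketch with a user-supplied convex generator f (the generator choice below is illustrative; the paper develops the divergence inside the LPBoost objective):

```python
import math

def total_bregman_divergence(x, y, f, grad_f):
    """Total Bregman divergence between scalars x and y for a convex
    generator f (a sketch of the divergence used to regularize
    LPBoost in the paper).

    The ordinary Bregman divergence is the vertical gap between f(x)
    and the tangent of f at y; the *total* version divides by the
    tangent's slope factor, i.e. measures the orthogonal gap.
    """
    bregman = f(x) - f(y) - grad_f(y) * (x - y)
    return bregman / math.sqrt(1.0 + grad_f(y) ** 2)
```

For f(t) = t^2 the ordinary Bregman divergence is (x - y)^2, so the total version becomes (x - y)^2 / sqrt(1 + 4y^2), which is no longer symmetric in how it treats the reference point y.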

DOI: 10.1109/CVPR.2011.5995686 · Published 2011-12-31
Citations: 9
Landmark/Image-based Deformable Registration of Gene Expression Data.
Uday Kurkure, Yen H Le, Nikos Paragios, James P Carson, Tao Ju, Ioannis A Kakadiaris

Analysis of gene expression patterns in brain images obtained from high-throughput in situ hybridization requires accurate and consistent annotations of anatomical regions/subregions. Such annotations are obtained by mapping an anatomical atlas onto the gene expression images through intensity- and/or landmark-based registration methods or deformable model-based segmentation methods. Due to the complex appearance of the gene expression images, these approaches require a pre-processing step to determine landmark correspondences in order to incorporate landmark-based geometric constraints. In this paper, we propose a novel method for landmark-constrained, intensity-based registration without determining landmark correspondences a priori. The proposed method performs dense image registration and identifies the landmark correspondences, simultaneously, using a single higher-order Markov Random Field model. In addition, a machine learning technique is used to improve the discriminating properties of local descriptors for landmark matching by projecting them into a Hamming space of lower dimension. We show qualitatively that our method achieves promising results and also compares well, quantitatively, with the expert's annotations, outperforming previous methods.
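The descriptor-matching step can be pictured as binarizing each local descriptor and comparing codes by Hamming distance. A sketch using random hyperplane projections as the binarization (the paper *learns* the projection with a machine learning technique to improve discriminability; the random projection here is a simple stand-in for illustration):

```python
import numpy as np

def hamming_codes(descriptors, n_bits, rng):
    """Project real-valued local descriptors into a low-dimensional
    Hamming space by thresholding random hyperplane projections
    (stand-in for the learned projection in the paper)."""
    planes = rng.standard_normal((descriptors.shape[1], n_bits))
    return (descriptors @ planes > 0).astype(np.uint8)

def hamming_match(codes_a, codes_b):
    """For each binary code in codes_a, return the index of the
    nearest code in codes_b under Hamming distance."""
    d = (codes_a[:, None, :] != codes_b[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)
```

The payoff of the binary representation is that matching reduces to XOR-and-popcount style operations, which is why lower-dimensional Hamming embeddings are popular for fast landmark matching.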

DOI: 10.1109/CVPR.2011.5995708 · pp. 1089-1096 · Published 2011-06-20
Citations: 20
Feature Guided Motion Artifact Reduction with Structure-Awareness in 4D CT Images.
Dongfeng Han, John Bayouth, Qi Song, Sudershan Bhatia, Milan Sonka, Xiaodong Wu

In this paper, we propose a novel method to reduce the magnitude of 4D CT artifacts by stitching two images with a data-driven regularization constraint, which helps preserve local anatomical structures. Our method first computes an interface seam for the stitching in the overlapping region of the first image; the seam passes through the "smoothest" region, reducing the structure complexity along the stitching interface. Then, we compute the displacements of the seam by matching the corresponding interface seam in the second image. We use sparse 3D features as structure cues to guide the seam matching, incorporating a regularization term to keep the structure consistent. The energy function is minimized by solving a multiple-label problem in Markov Random Fields with an anatomical-structure-preserving regularization term. The displacements are propagated to the rest of the second image, and the two images are stitched along the interface seams based on the computed displacement field. The method was tested on both simulated data and clinical 4D CT images. The experiments on simulated data demonstrated that the proposed method was able to reduce the landmark distance error on average from 2.9 mm to 1.3 mm, outperforming the registration-based method by about 55%. For clinical 4D CT image data, image quality was evaluated by three medical experts, all of whom identified far fewer artifacts in the images produced by our method than in those produced by the compared method.
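The "seam through the smoothest region" idea can be illustrated with a classic dynamic-programming formulation: accumulate a per-pixel cost row by row through the overlap region and backtrack the cheapest connected path. This is a 2-D sketch with an arbitrary non-negative cost map; the paper solves the analogous problem in 3-D with local structure complexity as the cost:

```python
import numpy as np

def smoothest_seam(cost):
    """Find a vertical seam (one column index per row) of minimal total
    cost through a 2-D cost map, allowing the seam to shift by at most
    one column between rows (a sketch of the interface-seam step).
    """
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost table
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            acc[r, c] += acc[r - 1, lo:hi].min()
    # Backtrack from the cheapest endpoint in the last row.
    seam = [int(acc[-1].argmin())]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam.append(lo + int(acc[r, lo:hi].argmin()))
    return seam[::-1]
```

A zero-cost column attracts the whole seam; costs that drift across the map produce a seam that bends to follow them.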

DOI: 10.1109/CVPR.2011.5995561 · pp. 1057-1064 · Published 2011-06-20
Citations: 8
Deciphering the Face.
Aleix M Martinez

We argue that robust computer vision algorithms for face analysis and recognition should be based on configural and shape features. In this model, the most important task to be solved by computer vision researchers is the accurate detection of facial features, rather than recognition. We base our arguments on recent results in cognitive science and neuroscience. In particular, we show that different facial expressions of emotion have diverse uses in human behavior/cognition and that a facial expression may be associated with multiple emotional categories. These two results contradict the continuous models in cognitive science, the limbic assumption in neuroscience and the multidimensional approaches typically employed in computer vision. Thus, we propose an alternative hybrid continuous-categorical approach to the perception of facial expressions and show that configural and shape features are most important for the recognition of emotional constructs by humans. We illustrate how these image cues can be successfully exploited by computer vision algorithms. Throughout the paper, we discuss the implications of these results for applications in face recognition and human-computer interaction.

DOI: 10.1109/CVPRW.2011.5981690 · pp. 7-12 · Published 2011-01-01
Citations: 15
Scale Invariant cosegmentation for image groups.
Lopamudra Mukherjee, Vikas Singh, Jiming Peng

Our primary interest is in generalizing the problem of Cosegmentation to a large group of images, that is, concurrent segmentation of common foreground region(s) from multiple images. We further wish for our algorithm to offer scale invariance (foregrounds may have arbitrary sizes in different images) and for its running time to grow no more than near-linearly in the number of images in the set. What makes this setting particularly challenging is that even if we ignore the scale invariance desideratum, the Cosegmentation problem, as formalized in many recent papers (except [1]), is already hard to solve optimally in the two-image case. A straightforward extension of such models to multiple images leads to loose relaxations; and unless we impose a distributional assumption on the appearance model, existing mechanisms for image-pair-wise measurement of foreground appearance variations lead to significantly large problem sizes (even for moderate numbers of images). This paper presents a surprisingly easy-to-implement algorithm which performs well, and satisfies all requirements listed above (scale invariance, low computational requirements, and viability for the multiple image setting). We present qualitative and technical analysis of the properties of this framework.

DOI: 10.1109/CVPR.2011.5995420 · pp. 1881-1888 · Published 2011-01-01
Citations: 132
Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data.
Amelia G White, Patricia G Cipriani, Huey-Ling Kao, Brandon Lees, Davi Geiger, Eduardo Sontag, Kristin C Gunsalus, Fabio Piano

We present a hierarchical principle for object recognition and its application to automatically classify developmental stages of C. elegans animals from a population of mixed stages. The object recognition machine consists of four hierarchical layers, each composed of units upon which evaluation functions output a label score, followed by a grouping mechanism that resolves ambiguities in the score by imposing local consistency constraints. Each layer then outputs groups of units, from which the units of the next layer are derived. Using this hierarchical principle, the machine builds up successively more sophisticated representations of the objects to be classified. The algorithm segments large and small objects, decomposes objects into parts, extracts features from these parts, and classifies them by SVM. We are using this system to analyze phenotypic data from C. elegans high-throughput genetic screens, and our system overcomes a previous bottleneck in image analysis by achieving near real-time scoring of image data. The system is in current use in a functioning C. elegans laboratory and has processed over two hundred thousand images for lab users.
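The final stage of the hierarchy classifies part-level feature vectors with an SVM. A minimal linear SVM trained by Pegasos-style sub-gradient descent (a self-contained stand-in for an off-the-shelf SVM library; in the paper the inputs would be the per-part descriptors produced by the earlier layers, and the hyperparameters here are illustrative):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with Pegasos-style stochastic sub-gradient
    descent on the hinge loss with L2 regularization.

    X: (n, d) feature vectors; y: labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            w *= (1.0 - eta * lam)           # shrink (regularizer)
            if y[i] * (X[i] @ w) < 1:        # hinge margin violated
                w += eta * y[i] * X[i]
    return w

def predict(w, X):
    """Sign of the decision function, as labels in {-1, +1}."""
    return np.where(X @ w >= 0, 1, -1)
```

This omits the bias term and the projection step of full Pegasos, which is enough for data separable through the origin; a production system would use a tuned SVM implementation instead.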

DOI: 10.1109/CVPR.2010.5540065 · pp. 3089-3096 · Published 2010-08-05
Cited by: 13
Multi-domain, Higher Order Level Set Scheme for 3D Image Segmentation on the GPU.
Ojaswa Sharma, Qin Zhang, François Anton, Chandrajit Bajaj

Level set method based segmentation provides an efficient tool for topological and geometrical shape handling. Conventional level set surfaces are only C^0 continuous, since the level set evolution involves linear interpolation to compute derivatives. Bajaj et al. present a higher-order method to evaluate level set surfaces that are C^2 continuous, but it is slow due to its high computational burden. In this paper, we provide a higher-order GPU-based solver for fast and efficient segmentation of large volumetric images. We also extend the higher-order method to multi-domain segmentation. Our streaming solver is memory-efficient.
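The role that derivative accuracy plays in level set evolution can be illustrated with a minimal explicit update step. The grid, speed function, and second-order central differences below (via `np.gradient`) are generic textbook choices, not the paper's GPU solver or its C^2 interpolation scheme.

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit update: phi <- phi - dt * speed * |grad phi|."""
    gy, gx = np.gradient(phi)          # second-order central differences
    grad_mag = np.sqrt(gx**2 + gy**2)
    return phi - dt * speed * grad_mag

# Signed distance to a circle of radius 8 on a 32x32 grid.
y, x = np.mgrid[0:32, 0:32]
phi = np.sqrt((x - 16.0)**2 + (y - 16.0)**2) - 8.0

# With uniform outward speed, the zero level set (the contour) expands:
# phi drops by about dt wherever |grad phi| ~ 1.
phi = level_set_step(phi, speed=1.0)
```

Because the derivatives come from linear (C^0) interpolation of `phi` on the grid, repeated steps accumulate error near the interface, which is the motivation for the higher-order evaluation the abstract refers to.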

Ojaswa Sharma, Qin Zhang, François Anton, Chandrajit Bajaj. "Multi-domain, Higher Order Level Set Scheme for 3D Image Segmentation on the GPU." Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 2211-2216. doi:10.1109/CVPR.2010.5539902
Cited by: 11
Image Atlas Construction via Intrinsic Averaging on the Manifold of Images.
Yuchen Xie, Jeffrey Ho, Baba C Vemuri

In this paper, we propose a novel algorithm for computing an atlas from a collection of images. In the literature, atlases have almost always been computed as some type of mean, such as the straightforward Euclidean mean or the more general Karcher mean on Riemannian manifolds. In the context of images, the paper's main contribution is a geometric framework for computing image atlases through a two-step process: the localization of the mean and its realization as an image. In the localization step, a few nearest neighbors of the mean among the input images are determined; the realization step then reconstructs the atlas image using these neighbors. Decoupling the localization step from the realization step provides the flexibility that allows us to formulate a general algorithm for computing image atlases. More specifically, we assume the input images belong to some smooth manifold M modulo image rotations. We use a graph structure to represent the manifold, and for the localization step we formulate a convex optimization problem in ℝ^k (where k is the number of input images) to determine the crucial neighbors that are used in the realization step to form the atlas image. The algorithm is both unbiased and rotation-invariant. We have evaluated the algorithm using synthetic and real images. In particular, experimental results demonstrate that atlases computed using the proposed algorithm preserve important image features and generally enjoy better image quality than atlases computed using existing methods.
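The two-step localize-then-realize idea can be sketched under heavy simplification. The plain Euclidean mean and nearest-neighbour average below stand in for the paper's graph representation of the manifold and its convex program in ℝ^k; every name here is illustrative, not the paper's method.

```python
import numpy as np

def localize_then_realize(images, n_neighbors=3):
    """Localize the mean, then realize the atlas from its nearest neighbours."""
    flat = images.reshape(len(images), -1)
    mean = flat.mean(axis=0)                      # localization target
    dists = np.linalg.norm(flat - mean, axis=1)
    nearest = np.argsort(dists)[:n_neighbors]     # the "crucial neighbours"
    atlas = flat[nearest].mean(axis=0)            # realization step
    return atlas.reshape(images.shape[1:]), nearest

rng = np.random.default_rng(0)
stack = rng.normal(size=(10, 8, 8))               # toy image stack
atlas, nearest = localize_then_realize(stack)
```

Realizing the atlas only from the mean's few nearest neighbours, rather than averaging the whole collection, is what lets the result stay close to actual images and avoids the blurring that a direct pixelwise mean would introduce.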

Yuchen Xie, Jeffrey Ho, Baba C Vemuri. "Image Atlas Construction via Intrinsic Averaging on the Manifold of Images." Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 2933-2939. doi:10.1109/CVPR.2010.5540035
Cited by: 19