
Latest publications: Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition

Robust Motion Estimation and Structure Recovery from Endoscopic Image Sequences With an Adaptive Scale Kernel Consensus Estimator.
Hanzi Wang, Daniel Mirota, Masaru Ishii, Gregory D Hager

To correctly estimate the camera motion parameters and reconstruct the structure of the surrounding tissues from endoscopic image sequences, we need not only to deal with outliers (e.g., mismatches), which may involve more than 50% of the data, but also to accurately distinguish inliers (correct matches) from outliers. In this paper, we propose a new robust estimator, Adaptive Scale Kernel Consensus (ASKC), which can tolerate more than 50 percent outliers while automatically estimating the scale of inliers. With ASKC, we develop a reliable feature tracking algorithm. This, in turn, allows us to develop a complete system for estimating endoscopic camera motion and reconstructing anatomical structures from endoscopic image sequences. Preliminary experiments on endoscopic sinus imagery have achieved promising results.

DOI: 10.1109/CVPR.2008.4587687 · pp. 1-7 · Published 2008-06-23
Citations: 49
Least Squares Congealing for Unsupervised Alignment of Images.
Mark Cox, Sridha Sridharan, Simon Lucey, Jeffrey Cohn

In this paper, we present an approach we refer to as "least squares congealing", which provides a solution to the problem of aligning an ensemble of images in an unsupervised manner. Our approach circumvents many of the limitations of the canonical "congealing" algorithm. Specifically, we present an algorithm that: (i) is able to simultaneously, rather than sequentially, estimate warp parameter updates, (ii) exhibits fast convergence, and (iii) requires no pre-defined step size. We present alignment results which show an improvement in performance for the removal of unwanted spatial variation, when compared with the related work of Learned-Miller, on two datasets: the MNIST handwritten digit database and the MultiPIE face database.
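For intuition, the simultaneous least-squares update can be sketched in one dimension with translation-only warps: every signal takes a Gauss-Newton step that reduces its squared difference to the mean of the other warped signals. The 1-D setting and function names are illustrative simplifications, not the paper's image formulation.

```python
import numpy as np

def congeal_shifts(signals, n_iters=100):
    """Translation-only least-squares congealing on 1-D signals: every
    signal takes a Gauss-Newton step that reduces its squared
    difference to the mean of the other (warped) signals."""
    n, length = signals.shape
    shifts = np.zeros(n)
    xs = np.arange(length)
    for _ in range(n_iters):
        warped = np.array([np.interp(xs + s, xs, sig)
                           for s, sig in zip(shifts, signals)])
        for i in range(n):
            target = (warped.sum(axis=0) - warped[i]) / (n - 1)
            grad = np.gradient(warped[i])          # d(warped)/d(shift)
            err = target - warped[i]
            shifts[i] += grad @ err / (grad @ grad + 1e-12)
    return shifts
```

Note that all shifts are updated within the same sweep rather than one signal at a time against a frozen template, which is the "simultaneous rather than sequential" property the abstract emphasises; the Gauss-Newton step also removes the need for a hand-tuned step size.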

DOI: 10.1109/CVPR.2008.4587573 · pp. 1-8 · Published 2008-06-23
Citations: 94
Enforcing Convexity for Improved Alignment with Constrained Local Models.
Yang Wang, Simon Lucey, Jeffrey F Cohn

Constrained local models (CLMs) have recently demonstrated good performance in non-rigid object alignment/tracking in comparison to leading holistic approaches (e.g., AAMs). A major problem hindering the further development of CLMs for non-rigid object alignment/tracking is how to jointly optimize the global warp update across all local search responses. Previous methods have either used general purpose optimizers (e.g., simplex methods) or graph based optimization techniques. Unfortunately, problems exist with both these approaches when applied to CLMs. In this paper, we propose a new approach for optimizing the global warp update in an efficient manner by enforcing convexity at each local patch response surface. Furthermore, we show that the classic Lucas-Kanade approach to gradient descent image alignment can be viewed as a special case of our proposed framework. Finally, we demonstrate that our approach achieves improved performance for the task of non-rigid face alignment/tracking on the MultiPIE database and the UNBC-McMaster archive.
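In one dimension the idea reduces to fitting each patch's response with a quadratic whose curvature is clipped to be positive; the joint warp update is then the closed-form minimiser of the summed quadratics. The cost-minimisation framing and helper names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def convex_quadratic(displacements, responses):
    """Least-squares fit of a 1-D quadratic cost to a patch response,
    with curvature clipped to be positive (convexity enforced)."""
    V = np.vander(displacements, 3)              # columns: d^2, d, 1
    a, b, c = np.linalg.lstsq(V, responses, rcond=None)[0]
    a = max(a, 1e-6)                             # enforce convexity
    return a, b

def joint_update(patch_fits):
    """Closed-form minimiser of the sum of per-patch convex
    quadratics: argmin_d  sum_i (a_i * d**2 + b_i * d)."""
    A = sum(a for a, _ in patch_fits)
    B = sum(b for _, b in patch_fits)
    return -B / (2.0 * A)
```

Because each patch contributes a convex term, the summed objective is itself convex and the global update is a single linear solve, which is the efficiency the abstract contrasts with simplex or graph-based optimizers.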

DOI: 10.1109/CVPR.2008.4587808 · pp. 1-8 · Published 2008-06-23
Citations: 176
Local Minima Free Parameterized Appearance Models.
Minh Hoai Nguyen, Fernando De la Torre

Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternative approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing so that local minima occur at, and only at, the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches.
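A tiny utility makes the paper's criterion concrete: sample a candidate cost along one parameter axis and check that its strict local minima fall only at the correct parameter. This helper is an illustrative check of the property being optimized, not the authors' learning procedure.

```python
import numpy as np

def local_minima_1d(costs):
    """Indices of strict local minima of a 1-D sampled cost curve."""
    c = np.asarray(costs, dtype=float)
    return [i for i in range(1, len(c) - 1)
            if c[i] < c[i - 1] and c[i] < c[i + 1]]
```

A cost curve that bottoms out only at the ground-truth parameter passes this check; one with spurious dips away from it is exactly the failure mode the paper's learned cost is designed to eliminate.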

DOI: 10.1109/CVPR.2008.4587524 · Published 2008-06-23
Citations: 19
Image Segmentation via Convolution of a Level-Set Function with a Rigaut Kernel.
Ozlem N Subakan, Baba C Vemuri

Image segmentation is a fundamental task in Computer Vision, and numerous algorithms have been successfully applied in various domains. There are still plenty of challenges to be met. In this paper, we consider one such challenge: achieving segmentation while preserving the complicated and detailed features present in the image, be it a gray-level or a textured image. We present a novel approach that does not make use of any prior information about the objects in the image being segmented. Segmentation is achieved using local orientation information, which is obtained via the application of a steerable Gabor filter bank, in a statistical framework. This information is used to construct a spatially varying kernel called the Rigaut Kernel, which is then convolved with the signed distance function of an evolving contour (placed in the image) to achieve segmentation. We present numerous experimental results on real images, including a quantitative evaluation. Superior performance of our technique is demonstrated via comparison to state-of-the-art algorithms in the literature.
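The mechanics of the convolution step can be sketched with a separable Gaussian standing in for the spatially varying Rigaut kernel (the Rigaut kernel itself is data-dependent and is not reproduced here); the zero level set of the smoothed signed distance function gives the evolved contour. The function name and kernel choice are assumptions for illustration only.

```python
import numpy as np

def smooth_level_set(phi, sigma=1.0, radius=3):
    """Convolve a 2-D level-set function with a separable Gaussian
    kernel (a stand-in for the paper's spatially varying Rigaut
    kernel); the zero level set of the result is the evolved contour."""
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (xs / sigma) ** 2)
    k /= k.sum()
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, phi)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, rows)
```

Smoothing the signed distance function rather than the image keeps the contour representation implicit, so topology changes of the evolving contour come for free, as in standard level-set methods.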

DOI: 10.1109/CVPR.2008.4587460 · pp. 1-6 · Published 2008-01-01
Citations: 11
Large Margin Pursuit for a Conic Section Classifier.
Santhosh Kodipaka, Arunava Banerjee, Baba C Vemuri

Learning a discriminant becomes substantially more difficult when the datasets are high-dimensional and the available samples are few. This is often the case in computer vision and medical diagnosis applications. A novel Conic Section classifier (CSC) was recently introduced in the literature to handle such datasets, wherein each class was represented by a conic section parameterized by its focus, directrix and eccentricity. The discriminant boundary was the locus of all points that are equi-eccentric relative to each class-representative conic section. Simpler boundaries were preferred for the sake of generalizability. In this paper, we improve the performance of the two-class classifier via a large margin pursuit. When formulated as a non-linear optimization problem, the margin computation is demonstrated to be hard, especially due to the high dimensionality of the data. Instead, we present a geometric algorithm to compute the distance of a point to the nonlinear discriminant boundary generated by the CSC in the input space. We then introduce a large margin pursuit in the learning phase so as to enhance the generalization capacity of the classifier. We validate the algorithm on real datasets and show favorable classification rates in comparison to many existing state-of-the-art binary classifiers as well as the CSC without margin pursuit.
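The geometric primitives are easy to state: the eccentricity of a point relative to a conic with a given focus and directrix hyperplane is the ratio of its focal distance to its directrix distance. The nearest-deviation decision rule below (assign a point to the class whose labelled eccentricity it matches most closely) is a hedged reading of the equi-eccentric boundary, and all names are illustrative.

```python
import numpy as np

def eccentricity(x, focus, w, b):
    """Eccentricity of point x relative to a conic defined by a focus
    and a directrix hyperplane {y : w.y + b = 0}: the ratio of the
    focal distance to the directrix distance."""
    d_focus = np.linalg.norm(x - focus)
    d_directrix = abs(w @ x + b) / np.linalg.norm(w)
    return d_focus / d_directrix

def classify(x, conics):
    """Assign x to the class whose labelled eccentricity its own
    eccentricity deviates from least. conics: list of
    (focus, w, b, class_eccentricity) tuples."""
    devs = [abs(eccentricity(x, f, w, b) - e) for f, w, b, e in conics]
    return int(np.argmin(devs))
```

Under this reading, the decision boundary is exactly the set of points whose eccentricity deviations from the two class conics are equal, i.e. the equi-eccentric locus the abstract describes.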

DOI: 10.1109/CVPR.2008.4587406 · pp. 1-6 · Published 2008-01-01
Citations: 6
A Multi-Compartment Segmentation Framework With Homeomorphic Level Sets.
Xian Fan, Pierre-Louis Bazin, Jerry L Prince

The simultaneous segmentation of multiple objects is an important problem in many imaging and computer vision applications. Various extensions of level set segmentation techniques to multiple objects have been proposed; however, no one method maintains object relationships, preserves topology, is computationally efficient, and provides an object-dependent internal and external force capability. In this paper, a framework for segmenting multiple objects that permits different forces to be applied to different boundaries while maintaining object topology and relationships is presented. Because of this framework, the segmentation of multiple objects each with multiple compartments is supported, and no overlaps or vacuums are generated. The computational complexity of this approach is independent of the number of objects to segment, thereby permitting the simultaneous segmentation of a large number of components. The properties of this approach and comparisons to existing methods are shown using a variety of images, both synthetic and real.
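The no-overlap/no-vacuum property can be made concrete: if each object keeps its own level-set function, assigning every pixel to the object with the largest value yields a partition by construction. This is a simplified sketch of that property only; the paper's homeomorphic (topology-preserving) constraints are not modelled here.

```python
import numpy as np

def label_map(phis):
    """phis: per-object level-set functions, shape (k, H, W).
    Each pixel receives exactly one label via argmax, so the
    resulting segmentation has no overlaps and no vacuums."""
    return np.argmax(phis, axis=0)
```

Because every pixel is covered by exactly one label regardless of how the individual functions evolve, consistency is guaranteed at the representation level rather than enforced by post-hoc conflict resolution.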

DOI: 10.1109/CVPR.2008.4587475 · pp. 1-6 · Published 2008-01-01
Citations: 30
Segmentation of Left Ventricle From 3D Cardiac MR Image Sequences Using A Subject-Specific Dynamical Model.
Yun Zhu, Xenophon Papademetris, Albert Sinusas, James S Duncan

Statistical model-based segmentation of the left ventricle from cardiac images has received considerable attention in recent years. While a variety of statistical models have been shown to improve segmentation results, most of them are either static models (SM), which neglect the temporal coherence of a cardiac sequence, or generic dynamical models (GDM), which neglect the inter-subject variability of cardiac shapes and deformations. In this paper, we use a subject-specific dynamical model (SSDM) that handles inter-subject variability and temporal dynamics (intra-subject variability) simultaneously. It can progressively identify the specific motion patterns of a new cardiac sequence based on the segmentations observed in the past frames. We formulate the integration of the SSDM into the segmentation process in a recursive Bayesian framework in order to segment each frame based on the intensity information from the current frame and the prediction from the past frames. We perform a leave-one-out test on 32 sequences to validate our approach. Quantitative analysis of experimental results shows that segmentation with the SSDM outperforms that with the SM and GDM by having better global and local consistency with the manual segmentation.
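The recursive structure can be illustrated with a scalar Gaussian fusion step, where the dynamical model's prediction from past frames plays the prior and the current frame's intensity evidence supplies the measurement. This is a generic Kalman-style update used for illustration, not the paper's full model.

```python
def recursive_update(prior_mean, prior_var, meas, meas_var):
    """One recursive Bayesian (Gaussian) fusion step: combine the
    dynamical model's prediction with the current frame's evidence.
    Returns the posterior mean and variance."""
    gain = prior_var / (prior_var + meas_var)
    post_mean = prior_mean + gain * (meas - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var
```

Applied frame by frame, each posterior becomes (after propagation through the dynamical model) the prior for the next frame, which is how the prediction from past frames and the current frame's intensities are balanced.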

DOI: 10.1109/CVPR.2008.4587433 · pp. 1-8 · Published 2008-01-01 · PMCID: PMC2801445
Citations: 0
Shape L'Âne Rouge: Sliding Wavelets for Indexing and Retrieval.
Adrian Peter, Anand Rangarajan, Jeffrey Ho

Shape representation and retrieval of stored shape models are becoming increasingly prominent in fields such as medical imaging, molecular biology and remote sensing. We present a novel framework that directly addresses the necessity for a rich and compressible shape representation, while simultaneously providing an accurate method to index stored shapes. The core idea is to represent point-set shapes as the square root of probability densities expanded in a wavelet basis. We then use this representation to develop a natural similarity metric that respects the geometry of these probability distributions, i.e. under the wavelet expansion, densities are points on a unit hypersphere and the distance between densities is given by the separating arc length. The process uses a linear assignment solver for non-rigid alignment between densities prior to matching; this has the connotation of "sliding" wavelet coefficients akin to the sliding block puzzle L'Âne Rouge. We illustrate the utility of this framework by matching shapes from the MPEG-7 data set and provide comparisons to other similarity measures, such as Euclidean distance shape distributions.
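The metric itself is short: normalise two discrete densities, take square roots (points on the unit hypersphere), and the distance is the separating arc length, i.e. the arccosine of their inner product. The sketch below transcribes that metric for discrete densities; the wavelet expansion and assignment-based alignment steps are omitted.

```python
import numpy as np

def sqrt_density_distance(p, q):
    """Geodesic (arc-length) distance between two discrete densities
    represented by their square roots on the unit hypersphere."""
    p = p / p.sum()
    q = q / q.sum()
    cos = np.clip(np.sqrt(p) @ np.sqrt(q), -1.0, 1.0)
    return np.arccos(cos)
```

Identical densities are at distance 0, and densities with disjoint support sit a quarter-circle apart (π/2), the maximum possible separation on the positive orthant of the hypersphere.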

DOI: 10.1109/CVPR.2008.4587838 · Article 4587838 · Published 2008-01-01 · PMCID: PMC2921664
引用次数: 0
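The hypersphere geometry described in the abstract lends itself to a compact sketch: representing densities by their square roots yields unit-norm coefficient vectors, and the distance between two densities is the arc length, i.e. the arccosine of their inner product. The sketch below substitutes simple discrete histograms for the paper's wavelet expansion (by orthonormality of the wavelet basis, the inner product of coefficient vectors equals that of the underlying square-root densities); the function names and the histogram stand-in are illustrative assumptions, not the authors' code, and the linear-assignment "sliding" step is omitted.

```python
import numpy as np

def sqrt_density_coeffs(hist):
    """Square root of a normalized density; the resulting coefficient
    vector has unit L2 norm, i.e. it lies on the unit hypersphere."""
    p = hist / hist.sum()
    return np.sqrt(p)

def arc_length_distance(c1, c2):
    """Geodesic (arc-length) distance between two unit vectors on the
    hypersphere: arccos of their inner product, clipped for stability."""
    dot = np.clip(np.dot(c1.ravel(), c2.ravel()), -1.0, 1.0)
    return np.arccos(dot)
```

Identical densities give distance 0, and densities with disjoint support give the maximal separation of pi/2, which is what makes this a bounded, geometry-respecting similarity metric.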
A Graph Cut Approach to Image Segmentation in Tensor Space.
James Malcolm, Yogesh Rathi, Allen Tannenbaum

This paper proposes a novel method to apply the standard graph cut technique to segmenting multimodal tensor valued images. The Riemannian nature of the tensor space is explicitly taken into account by first mapping the data to a Euclidean space where non-parametric kernel density estimates of the regional distributions may be calculated from user initialized regions. These distributions are then used as regional priors in calculating graph edge weights. Hence this approach utilizes the true variation of the tensor data by respecting its Riemannian structure in calculating distances when forming probability distributions. Further, the non-parametric model generalizes to arbitrary tensor distribution unlike the Gaussian assumption made in previous works. Casting the segmentation problem in a graph cut framework yields a segmentation robust with respect to initialization on the data tested.

DOI: 10.1109/CVPR.2007.383404 · Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1-8, published 2008-01-01
Citations: 69
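The pipeline the abstract describes — map each symmetric positive-definite tensor to a Euclidean vector, then turn kernel density estimates built from user-initialized regions into regional graph edge weights — can be sketched minimally as below. The Log-Euclidean matrix-logarithm map and the Gaussian kernel bandwidth are common choices assumed here for illustration; the paper's exact mapping and the graph-cut solver itself are not reproduced.

```python
import numpy as np

def tensor_to_euclidean(T):
    """Map a symmetric positive-definite tensor to a Euclidean vector via the
    matrix logarithm (Log-Euclidean map), computed from the eigendecomposition.
    Keeping the upper triangle retains all free entries of the symmetric result."""
    w, V = np.linalg.eigh(T)
    L = V @ np.diag(np.log(w)) @ V.T
    i, j = np.triu_indices(T.shape[0])
    return L[i, j]

def kde_edge_weight(x, seed_samples, h=0.5):
    """Gaussian-kernel density estimate of x from user-seeded sample vectors,
    returned as a negative log-likelihood for use as a regional edge weight:
    points well explained by a region's seeds get a low cut cost."""
    d2 = np.sum((seed_samples - x) ** 2, axis=1)
    density = np.mean(np.exp(-d2 / (2.0 * h * h)))
    return -np.log(density + 1e-12)
```

The identity tensor maps to the zero vector (its logarithm vanishes), and vectors far from a region's seeds receive larger edge weights than vectors near them, which is the behavior the regional priors rely on.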
Journal
Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition