
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops: Latest Publications

Physiological modelling for improved reliability in silhouette-driven gradient-based hand tracking
Paris Kaimakis, Joan Lasenby
We present a gradient-based motion capture system that robustly tracks a human hand, based on abstracted visual information - silhouettes. Despite the ambiguity in the visual data and despite the vulnerability of gradient-based methods in the face of such ambiguity, we minimise problems related to misfit by using a model of the hand's physiology, which is entirely non-visual, subject-invariant, and assumed to be known a priori. By modelling seven distinct aspects of the hand's physiology we derive prior densities which are incorporated into the tracking system within a Bayesian framework. We demonstrate how the posterior is formed, and how our formulation leads to the extraction of the maximum a posteriori estimate using a gradient-based search. Our results demonstrate an enormous improvement in tracking precision and reliability, while also achieving near real-time performance.
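To make the MAP-with-prior idea above concrete, here is a minimal, self-contained Python sketch (not the authors' formulation): gradient descent on a negative log-posterior whose data term is ambiguous (two equally good minima) and whose prior breaks the tie, much as a physiological prior resolves silhouette ambiguity. The toy energy functions, step size, and iteration count are hypothetical stand-ins.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def map_estimate(x0, neg_log_lik, neg_log_prior, lr=0.05, steps=500):
    """Gradient descent on the negative log-posterior = data term + prior term."""
    x = x0.astype(float)
    for _ in range(steps):
        x -= lr * numerical_gradient(lambda p: neg_log_lik(p) + neg_log_prior(p), x)
    return x

# Toy stand-ins: an ambiguous "silhouette misfit" with two equally good minima
# at p = -1 and p = +1, and a "physiological" prior favouring p = +1.
neg_log_lik = lambda p: (p[0] ** 2 - 1.0) ** 2
neg_log_prior = lambda p: 2.0 * (p[0] - 1.0) ** 2

print(map_estimate(np.array([-0.2]), neg_log_lik, neg_log_prior))  # converges near +1
```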
Citations: 8
Event detection using local binary pattern based dynamic textures
Yunqian Ma, P. Císar̆
Detecting suspicious events from video surveillance cameras has recently become an important task. Many trajectory-based descriptors have been developed, for example to detect people running or moving in the opposite direction. However, these trajectory-based descriptors do not work well in crowded environments such as airports and rail stations, because they assume perfect motion/object segmentation. In this paper, we present an event detection method using a dynamic texture descriptor. The dynamic texture descriptor is an extension of local binary patterns. The image sequences are divided into regions, and a flow is formed based on the similarity of the dynamic texture descriptors on the regions. We used a real dataset for experiments, and the results are promising.
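For reference, a minimal Python sketch of a plain spatial LBP region descriptor (the paper uses a dynamic-texture extension of LBP over image sequences, which is not reproduced here); the histogram-intersection similarity at the end is just one illustrative way to compare region descriptors, and the random data are stand-ins.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes for the interior pixels of a 2D array."""
    c = img[1:-1, 1:-1]
    # Neighbour offsets in a fixed order; each comparison contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def region_histogram(codes, bins=256):
    """Normalised LBP histogram used as a texture descriptor for one region."""
    h, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)

# Example: compare two region descriptors with histogram-intersection similarity.
rng = np.random.default_rng(0)
a, b = rng.integers(0, 256, (32, 32)), rng.integers(0, 256, (32, 32))
ha, hb = region_histogram(lbp_image(a)), region_histogram(lbp_image(b))
print(np.minimum(ha, hb).sum())  # similarity in [0, 1]
```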
Citations: 34
Robust facial action recognition from real-time 3D streams
F. Tsalakanidou, S. Malassiotis
This paper presents a completely automated facial action and facial expression recognition system using 2D + 3D images recorded in real-time by a structured light sensor. It is based on local feature tracking and rule-based classification of geometric, appearance and surface curvature measurements. Good performance is achieved under relatively non-controlled conditions.
Citations: 27
Incremental Bayesian learning of feature points from natural images
M. Toivanen, J. Lampinen
Automatically selecting feature points of an object appearing in images is a difficult but vital task for learning the feature point based representation of the object model. In this work we present an incremental Bayesian model that learns the feature points of an object from natural, un-annotated images by matching the corresponding points. The training set is recursively expanded and the model parameters are updated after matching each image. The set of nodes in the first image is matched in the second image by sampling the un-normalized posterior distribution with particle filters. For each matched node the model assigns a probability of being associated with the object, and after a few images have been matched, nodes with low association probabilities are replaced with new ones to increase the number of object nodes. A feature point based representation of the object model is formed from the matched corresponding points. In the tested images, the model matches the corresponding points better than the well-known elastic bunch graph matching batch method and gives promising results in recognizing learned object models in novel images.
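A minimal illustration of the particle-filtering step described above (sampling an un-normalised posterior and resampling by weight), using a synthetic posterior; the proposal distribution, weighting, and dimensions are hypothetical stand-ins, not the authors' model.

```python
import numpy as np

def particle_match(unnorm_posterior, proposal_sampler, n_particles=1000, rng=None):
    """Sample candidate node positions from an un-normalised posterior by
    importance sampling followed by multinomial resampling."""
    rng = rng or np.random.default_rng()
    particles = proposal_sampler(n_particles, rng)            # candidate positions
    weights = np.array([unnorm_posterior(p) for p in particles])
    weights = weights / weights.sum()                          # normalise
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    return particles[idx], weights

# Toy example: posterior peaked at (5, 5) inside a 10x10 image.
post = lambda p: np.exp(-0.5 * np.sum((p - 5.0) ** 2))
prop = lambda n, rng: rng.uniform(0, 10, size=(n, 2))
resampled, w = particle_match(post, prop)
print(resampled.mean(axis=0))  # close to [5, 5]
```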
Citations: 4
Computer vision on tap
Kevin Chiu, R. Raskar
We demonstrate a concept of computer vision as a secure, live service on the Internet. We show a platform for distributing a real-time vision algorithm using simple, widely available Web technologies such as Adobe Flash. We allow a user to access this service without downloading an executable or sharing the image stream with anyone, and we enable developers to publish without distribution complexity. Finally, the platform supports user-permitted aggregation of data for computer vision research or analysis. We describe results for a simple distributed motion detection algorithm and discuss future scenarios for organically extending the horizon of computer vision research.
Citations: 6
Beyond one-to-one feature correspondence: The need for many-to-many matching and image abstraction
Sven J. Dickinson
Summary form only given: In this paper, I briefly review three formulations of the many-to-many matching problem as applied to model acquisition, model indexing, and object recognition. In the first scenario, I describe the problem of learning a prototypical shape model from a set of exemplars that may not share a single local feature in common. We formulate the problem as a search through the intractable space of feature combinations, or abstractions, to find the "lowest common abstraction" that is derivable from each input exemplar. This abstraction, in turn, defines a many-to-many feature correspondence among the extracted input features.
Citations: 1
Automatic detection of body parts in x-ray images
V. Jeanne, D. Ünay, Vincent Jacquet
The number of digital images that need to be acquired, analyzed, classified, stored and retrieved in medical centers is growing exponentially with advances in medical imaging technology. Accordingly, medical image classification and retrieval has become a popular topic in recent years. Despite many projects focusing on this problem, proposed solutions are still far from being sufficiently accurate for real-life implementations. Interpreting medical image classification and retrieval as a multi-class classification task, we investigate in this work the performance of five different feature types in an SVM-based learning framework for classification of human body X-ray images into classes corresponding to body parts. Our comprehensive experiments show that four conventional feature types provide performance comparable to the literature but with low per-class accuracies, whereas local binary patterns produce not only very good global accuracy but also good class-specific accuracies with respect to the features used in the literature.
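A minimal sketch of the kind of pipeline the abstract describes: a multi-class SVM trained on histogram features (here synthetic stand-ins for LBP histograms) evaluated by cross-validation with scikit-learn. The class labels, feature dimensions, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_per_class, n_classes, n_bins = 40, 5, 256       # e.g. skull, chest, hand, pelvis, foot
X, y = [], []
for c in range(n_classes):
    alpha = np.full(n_bins, 0.5)
    alpha[c * 50:(c + 1) * 50] = 5.0              # each class favours its own bins
    X.append(rng.dirichlet(alpha, n_per_class))   # stand-in LBP histograms
    y.append(np.full(n_per_class, c))
X, y = np.vstack(X), np.concatenate(y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())    # mean cross-validated accuracy
```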
Citations: 18
Robust estimation of stem cell lineages using local graph matching
Min Liu, A. Roy-Chowdhury, G. Reddy
In this paper, we present a local graph matching based method for tracking cells and cell divisions. This will allow us to estimate the lineages of the cells in a 4D spatio-temporal image stack obtained using fluorescence imaging techniques. We work with plant cells, where the cells are tightly clustered in space and computing correspondences in space and time can be very challenging. The local graph matching method is able to compute the lineages even when significant portions of the images are corrupted due to sensor noise in the imaging process or segmentation errors. The geometric structure and topology of the cells' relative positions are efficiently exploited to solve the tracking problem using the local graph matching technique. The process not only computes the correspondences of cells across spatial and temporal image slices, but is also able to find out where and when cells divide, identify new cells and detect missing ones. Using this method we show experimental results to track the properly segmented cells, and compute cell lineages from images captured over 72 hours, even when some of those images are highly noisy (e.g., missing cells).
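A minimal sketch of frame-to-frame cell correspondence using local geometric structure (not the authors' local graph matching algorithm): pairing costs combine centroid displacement with the mismatch of k-nearest-neighbour distance profiles, and a global assignment is found with the Hungarian method. The weighting, k, and the synthetic cell positions are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def neighbour_profile(points, k=3):
    """Sorted distances to the k nearest neighbours of each point."""
    d = cdist(points, points)
    d.sort(axis=1)
    return d[:, 1:k + 1]          # drop the zero self-distance

def match_cells(prev_pts, curr_pts, alpha=1.0, k=3):
    cost = cdist(prev_pts, curr_pts)                      # centroid displacement
    cost += alpha * cdist(neighbour_profile(prev_pts, k),
                          neighbour_profile(curr_pts, k)) # local-structure mismatch
    return linear_sum_assignment(cost)                    # row/column index pairs

rng = np.random.default_rng(1)
prev_pts = rng.uniform(0, 50, (10, 2))
curr_pts = prev_pts + rng.normal(0, 0.5, prev_pts.shape)  # small drift between frames
print(match_cells(prev_pts, curr_pts))
```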
Citations: 13
Automatic symmetry-integrated brain injury detection in MRI sequences
Yu Sun, B. Bhanu, Shiv Bhanu
This paper presents a fully automated symmetry-integrated brain injury detection method for magnetic resonance imaging (MRI) sequences. Current injury detection methods often require a large amount of training data or a prior model that is only applicable to a limited domain of brain slices, and suffer from low computational efficiency and robustness. Our proposed approach can detect injuries in a wide variety of brain images since it uses symmetry as a dominant feature and does not rely on any prior models or training phases. The approach consists of the following steps: (a) symmetry-integrated segmentation of brain slices based on a symmetry affinity matrix, (b) computation of kurtosis and skewness of the symmetry affinity matrix to find potential asymmetric regions, (c) clustering of the pixels in the symmetry affinity matrix using a 3D relaxation algorithm, (d) fusion of the results of (b) and (c) to obtain refined asymmetric regions, and (e) a Gaussian mixture model for unsupervised classification of potential asymmetric regions as the set of regions corresponding to brain injuries. Experiments are carried out to demonstrate the efficacy of the approach.
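Two of the listed steps can be illustrated with standard library calls on synthetic stand-in data: kurtosis and skewness of a symmetry-affinity map (step b) and an unsupervised two-component Gaussian mixture over per-region features (step e). The feature choices and distributions below are illustrative only, not the paper's.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
affinity = rng.beta(5, 2, size=(128, 128))           # stand-in symmetry-affinity map
print(kurtosis(affinity, axis=None), skew(affinity, axis=None))

# Hypothetical per-region features (e.g. mean affinity, local skewness) for candidates.
normal = rng.normal([0.8, 0.0], 0.05, size=(80, 2))
asymmetric = rng.normal([0.4, 1.2], 0.10, size=(20, 2))
features = np.vstack([normal, asymmetric])

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)                        # unsupervised split into two groups
print(np.bincount(labels))
```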
Citations: 31
Deformable tree models for 2D and 3D branching structures extraction
J. Mille, L. Cohen
The proposed model is devoted to the segmentation and reconstruction of branching structures, like vascular trees. We rely on an explicit representation of a deformable tree, where topological relationships between segments are modeled. This allows easy posterior interactions and quantitative analysis, such as measuring diameters or lengths of vessels. Starting from a unique user-provided root point, an initial tree is built with a technique relying on minimal paths. Within the constructed tree, the central curve of each segment and an associated variable radius function evolve in order to satisfy a region homogeneity criterion.
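A minimal sketch of the "minimal path from a root point" ingredient (not the full deformable-tree model): Dijkstra's algorithm on a 4-connected pixel grid whose costs are low along vessel-like pixels. The cost map here is synthetic and the connectivity choice is an assumption.

```python
import heapq
import numpy as np

def minimal_path(cost, root):
    """Dijkstra over a 2D cost map; returns accumulated cost and predecessor map."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    pred = {}
    dist[root] = cost[root]
    heap = [(cost[root], root)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                pred[(ny, nx)] = (y, x)
                heapq.heappush(heap, (dist[ny, nx], (ny, nx)))
    return dist, pred

cost = np.ones((32, 32))
cost[16, :] = 0.01                 # a cheap horizontal "vessel"
dist, pred = minimal_path(cost, (16, 0))
print(dist[16, 31])                # accumulated cost along the vessel
```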
Citations: 30