
Latest publications: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Automatic symmetry-integrated brain injury detection in MRI sequences
Yu Sun, B. Bhanu, Shiv Bhanu
This paper presents a fully automated symmetry-integrated brain injury detection method for magnetic resonance imaging (MRI) sequences. Current injury detection methods often require a large amount of training data or a prior model that applies only to a limited domain of brain slices, and suffer from low computational efficiency and robustness. Our proposed approach can detect injuries in a wide variety of brain images because it uses symmetry as the dominant feature and relies on no prior models or training phases. The approach consists of the following steps: (a) symmetry-integrated segmentation of brain slices based on a symmetry affinity matrix; (b) computation of the kurtosis and skewness of the symmetry affinity matrix to find potential asymmetric regions; (c) clustering of the pixels in the symmetry affinity matrix using a 3D relaxation algorithm; (d) fusion of the results of (b) and (c) to obtain refined asymmetric regions; (e) unsupervised classification, via a Gaussian mixture model, of the potential asymmetric regions into the set of regions corresponding to brain injuries. Experiments demonstrate the efficacy of the approach.
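Step (b), flagging candidate asymmetric regions via the kurtosis and skewness of the symmetry affinity matrix, can be sketched block-wise as below. This is a toy version, not the authors' code; the block size and the encoding of the affinity map (0 = perfectly symmetric, larger = more asymmetric) are assumptions.

```python
import numpy as np

def asymmetry_stats(affinity, block=8):
    """Per-block skewness and excess kurtosis of a symmetry affinity map.

    `affinity` holds, per pixel, the dissimilarity between the pixel and
    its mirror across the estimated symmetry axis. Blocks whose score
    distribution is heavily skewed toward high values are candidate
    asymmetric (potentially injured) regions.
    """
    h, w = affinity.shape
    candidates = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = affinity[i:i + block, j:j + block].ravel()
            mu, sigma = patch.mean(), patch.std()
            if sigma < 1e-12:
                continue  # perfectly uniform block: no asymmetry signal
            z = (patch - mu) / sigma
            skew = np.mean(z ** 3)
            kurt = np.mean(z ** 4) - 3.0  # excess kurtosis
            candidates.append(((i, j), skew, kurt))
    return candidates
```

Thresholding the returned skewness/kurtosis values would yield the potential asymmetric regions that are later fused with the relaxation-clustering result.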
DOI: 10.1109/CVPRW.2009.5204052 (published 2009-06-20)
Citations: 31
Image matching in large scale indoor environment
Hongwen Kang, Alexei A. Efros, M. Hebert, T. Kanade
In this paper, we propose a data-driven approach to first-person vision. We propose a novel image matching algorithm, named Re-Search, that is designed to cope with self-repetitive structures and confusing patterns in the indoor environment. This algorithm uses state-of-the-art image search techniques and matches a query image with a two-pass strategy. In the first pass, a conventional image search algorithm retrieves a small number of images that are most similar to the query image. In the second pass, the retrieval results from the first pass are used to discover features that are more distinctive in the local context. We demonstrate and evaluate the Re-Search algorithm in the context of indoor localization, with illustrations of potential applications in object pop-out and data-driven zoom-in.
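The two-pass strategy can be sketched with bag-of-visual-words histograms: a first pass ranks by plain cosine similarity, and a second pass re-weights visual words by how rare they are within the shortlist, so words shared by self-repetitive structures lose influence. This is a simplified reading of the idea, not the authors' implementation.

```python
import numpy as np

def two_pass_match(query, database, shortlist=5):
    """Two-pass retrieval sketch in the spirit of Re-Search.

    Pass 1: rank all database histograms by cosine similarity.
    Pass 2: compute idf over the shortlist only, so visual words shared
    by many near-duplicate retrievals (repetitive indoor structures) are
    down-weighted, and re-rank the shortlist.
    """
    db = np.asarray(database, dtype=float)
    q = np.asarray(query, dtype=float)

    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # Pass 1: global ranking.
    first = np.argsort([-cosine(q, d) for d in db])[:shortlist]

    # Pass 2: idf restricted to the shortlist finds locally distinctive words.
    df = (db[first] > 0).sum(axis=0)
    idf = np.log((len(first) + 1) / (df + 1))
    second = sorted(first, key=lambda i: -cosine(q * idf, db[i] * idf))
    return list(second)
```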
DOI: 10.1109/CVPRW.2009.5204357 (published 2009-06-20)
Citations: 57
A level set-based global shape prior and its application to image segmentation
Lei Zhang, Q. Ji
Global shape prior knowledge is a special kind of semantic information that can be incorporated into an image segmentation process to handle the difficulties caused by such problems as occlusion, cluttering, noise, and/or low contrast boundaries. In this work, we propose a global shape prior representation and incorporate it into a level set based image segmentation framework. This global shape prior can effectively help remove the cluttered elongate structures and island-like artifacts from the evolving contours. We apply this global shape prior to segmentation of three sequences of electron tomography membrane images. The segmentation results are evaluated both quantitatively and qualitatively by visual inspection. Accurate segmentation results are achieved in the testing sequences, which demonstrates the capability of the proposed global shape prior representation.
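The effect the abstract describes, removing island-like artifacts from the evolving contour, can be illustrated with a crude post-processing stand-in. The paper incorporates the prior into the level-set energy itself; the 4-connectivity and the area threshold below are assumptions made only for illustration.

```python
import numpy as np

def prune_islands(mask, min_area=20):
    """Drop small island-like components from a binary segmentation
    (e.g. mask = (phi < 0) for a level-set function phi)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask, dtype=bool)
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                # flood-fill one 4-connected component
                stack, comp = [(si, sj)], []
                seen[si, sj] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(comp) >= min_area:  # keep only sufficiently large regions
                    for i, j in comp:
                        out[i, j] = True
    return out
```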
DOI: 10.1109/CVPRW.2009.5204275 (published 2009-06-20)
Citations: 1
A coding scheme for indexing multimodal biometric databases
A. Gyaourova, A. Ross
In biometric identification systems, the identity associated with the input data is determined by comparing it against every entry in the database. This exhaustive matching process increases the response time of the system and, potentially, the rate of erroneous identification. A method that narrows the list of potential identities will allow the input data to be matched against a smaller number of identities. We describe a method for indexing large-scale multimodal biometric databases based on the generation of an index code for each enrolled identity. In the proposed method, the input biometric data is first matched against a small set of reference images. The set of ensuing match scores is used as an index code. The index codes of multiple modalities are then integrated using three different fusion techniques in order to further improve the indexing performance. Experiments on a chimeric face and fingerprint bimodal database indicate a 76% reduction in the search space at 100% hit rate. These results suggest that indexing has the potential to substantially improve the response time of multimodal biometric systems without compromising the accuracy of identification.
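The index-code idea, representing each identity by its vector of match scores against a small fixed reference set, can be sketched as follows. The matcher, the reference set, and the Euclidean shortlist distance are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def index_code(sample, references, matcher):
    """Index code = vector of match scores between a biometric sample
    and a small fixed reference set."""
    return np.array([matcher(sample, r) for r in references])

def shortlist(probe_code, gallery_codes, k=3):
    """Return the k enrolled identities whose index codes are closest to
    the probe's code, narrowing the exhaustive 1:N search to 1:k."""
    d = np.linalg.norm(np.asarray(gallery_codes) - probe_code, axis=1)
    return list(np.argsort(d)[:k])
```

In the paper, codes from multiple modalities (face, fingerprint) are additionally fused before retrieval; the sketch above covers a single modality.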
DOI: 10.1109/CVPRW.2009.5204311 (published 2009-06-20)
Citations: 37
Global and local quality measures for NIR iris video
Jinyu Zuo, N. Schmid
In the field of iris-based recognition, evaluating the quality of images has a number of important applications, including image acquisition, enhancement, and data fusion. Iris image quality metrics designed for these applications are used as figures of merit to quantify degradations or improvements in iris images caused by various image processing operations. This paper elaborates on quality factors and introduces new global and local factors that can be used to evaluate iris video and image quality. The main contributions of the paper are as follows. (1) A fast global quality evaluation procedure for selecting the best frames from a video or an image sequence is introduced. (2) A number of new local quality measures for iris biometrics are introduced. The performance of the individual quality measures is carefully analyzed. Since the performance of iris recognition systems is evaluated in terms of the distributions of matching scores and the recognition probability of error, a good iris image quality metric is also expected to have performance linked to the recognition performance of the biometric recognition system.
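A fast global quality measure for best-frame selection could look like the sketch below, which uses gradient energy as a focus proxy. The paper's actual global measure is more elaborate; this stand-in only conveys the frame-selection mechanism.

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy focus measure: blurred or defocused NIR frames
    score low, sharp in-focus frames score high."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def best_frames(video, n=2):
    """Indices of the n highest-quality frames in a sequence."""
    scores = [sharpness(f) for f in video]
    return list(np.argsort(scores)[::-1][:n])
```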
DOI: 10.1109/CVPRW.2009.5204310 (published 2009-06-20)
Citations: 31
Egocentric recognition of handled objects: Benchmark and analysis
Xiaofeng Ren, Matthai Philipose
Recognizing objects being manipulated in the hands can provide essential information about a person's activities and has far-reaching impacts on the application of vision in everyday life. The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions. We collect a comprehensive dataset and analyze the feasibility and challenges of the egocentric recognition of handled objects. We use a lapel-worn camera and record uncompressed video streams as human subjects manipulate objects in daily activities. We use 42 day-to-day objects that vary in size, shape, color, and texture. Ten video sequences are shot for each object under different illuminations and backgrounds. We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with its unique constraints, such as hand color, location prior, and temporal consistency. SIFT-based recognition has an average recognition rate of 12%, which reaches 20% when temporal consistency is enforced. We use simulations to estimate the upper bound for SIFT-based recognition at 64%, the loss of accuracy due to background clutter at 20%, and that due to hand occlusion at 13%. Our quantitative evaluations show that the egocentric recognition of handled objects is a challenging but feasible problem with many unique characteristics and many opportunities for future research.
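One simple way to enforce the temporal consistency mentioned above is a sliding-window majority vote over per-frame predictions. This is a hedged sketch of the constraint, not necessarily the paper's exact scheme.

```python
from collections import Counter

def smooth_labels(frame_labels, window=5):
    """Replace each per-frame object label by the majority label in a
    centered window, suppressing isolated misclassifications (e.g. a
    single motion-blurred frame inside a stable grasp)."""
    half = window // 2
    out = []
    for t in range(len(frame_labels)):
        lo, hi = max(0, t - half), min(len(frame_labels), t + half + 1)
        out.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return out
```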
DOI: 10.1109/CVPRW.2009.5204360 (published 2009-06-20)
Citations: 114
3D segmentation of rodent brains using deformable models and variational methods
Shaoting Zhang, Jinghao Zhou, Xiaoxu Wang, Sukmoon Chang, Dimitris N. Metaxas, George J. Pappas, F. Delis, N. Volkow, Gene-Jack Wang, P. Thanos, C. Kambhamettu
3D functional segmentation of brain images is important in understanding the relationships between anatomy and mental diseases in brains. Volumetric analysis of various brain structures such as the cerebellum plays a critical role in studying the structural changes in brain regions as a function of development, trauma, or neurodegeneration. Although various segmentation methods have been proposed in clinical studies, many of them require a priori knowledge about the locations of the structures of interest, which prevents fully automatic segmentation. In addition, the topological changes of structures are difficult to detect. In this paper, we present a novel method for detecting and locating the brain structures of interest that can be used for the fully automatic 3D functional segmentation of rodent brain MR images. The presented method is based on active shape models (ASM), Metamorph models, and variational techniques. It focuses on detecting the topological changes of brain structures based on a novel volume-ratio criterion. The mean success rate of the topological change detection is 86.6% compared to the expert-identified ground truth.
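A volume-ratio criterion for topological change can be conveyed with a minimal sketch: flag positions where a structure's volume changes abruptly relative to the previous slice (suggesting a split, merge, appearance, or disappearance). The threshold and the slice-to-slice formulation are assumptions for illustration.

```python
def topology_change(volumes, ratio_thresh=0.5):
    """Return indices i where volumes[i] / volumes[i-1] falls outside
    [ratio_thresh, 1/ratio_thresh], i.e. the structure's cross-sectional
    volume jumps in a way consistent with a topological change."""
    flags = []
    for i in range(1, len(volumes)):
        prev, cur = volumes[i - 1], volumes[i]
        if prev == 0 or cur == 0:
            if prev != cur:  # structure appears or vanishes entirely
                flags.append(i)
            continue
        r = cur / prev
        if r < ratio_thresh or r > 1.0 / ratio_thresh:
            flags.append(i)
    return flags
```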
DOI: 10.1109/CVPRW.2009.5204051 (published 2009-06-20)
Citations: 3
Quasiconvex alignment of multimodal skin images for quantitative dermatology
S. Madan, Kristin J. Dana, G. O. Cula
In quantitative dermatology, high resolution sensors provide images that capture fine scale features like pores, birthmarks, and moles. Breathing and minute movements result in misregistration of micro level features. Many computer vision methods for dermatology such as change detection, appearance capture, and multi sensor fusion require high accuracy point-wise registration of micro level features. However, most computer vision algorithms are based on macro level features such as eyes, nose, and lips, and aren't suitable for registering micro level features. In this paper, we develop a practical robust algorithm to align face regions using skin texture with mostly indistinct micro level features. In computer vision, these regions would typically be considered featureless regions. Our method approximates the face surface as a collection of quasi-planar skin patches and uses quasiconvex optimization and the L∞ norm for estimation of spatially varying homographies. We have assembled a unique dataset of high resolution dermatology images comprised of over 100 human subjects. The image pairs vary in imaging modality (crossed, parallel and no polarization) and are misregistered due to the natural non-rigid human movement between image capture. This method of polarization based image capture is commonly used in dermatology to image surface and subsurface structure. Using this dataset, we show high quality alignment of “featureless” regions and demonstrate that the algorithm works robustly over a large set of subjects with different skin texture appearance, not just a few test images.
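The minimax (L∞) residual criterion at the heart of the method can be illustrated with a toy version restricted to integer translations of one quasi-planar patch. The paper estimates full spatially varying homographies via quasiconvex optimization; this brute-force sketch only conveys what minimizing the worst-case residual means.

```python
import numpy as np

def align_patch_linf(src, dst, max_shift=3):
    """Find the integer translation minimizing the L-infinity intensity
    residual between a source patch and windows of a destination image.
    `dst` is assumed padded by `max_shift` pixels around the patch."""
    best, best_err = (0, 0), float("inf")
    h, w = src.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y0, x0 = max_shift + dy, max_shift + dx
            window = dst[y0:y0 + h, x0:x0 + w]
            err = float(np.max(np.abs(window - src)))  # worst-case residual
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err
```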
DOI: 10.1109/CVPRW.2009.5204346 (published 2009-06-20)
Citations: 5
A projector-camera system for creating a display with water drops
P. Barnum, S. Narasimhan, T. Kanade
Various non-traditional media, such as water drops, mist, and fire, have been used to create vibrant two and three dimensional displays. Usually such displays require a great deal of design and engineering. In this work, we show a computer vision based approach to easily calibrate and learn the properties of a three-dimensional water drop display, using a few pieces of off-the-shelf hardware. Our setup consists of a camera, projector, laser plane, and water drop generator. Based on the geometric calibration between the hardware, a user can “paint” the drops from the point of view of the camera, causing the projector to illuminate them with the correct color at the correct time. We first demonstrate an algorithm for the case where no drop occludes another from the point of view of either camera or projector. If there is no occlusion, the system can be trained once, and the projector plays a precomputed movie. We then show our work toward a display with real rain. In real time, our system tracks and predicts the future location of hundreds of drops per second, then projects rays to hit or miss each drop.
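The timing side of the problem can be conveyed with a free-fall model of a drop: knowing when a drop was released (or crossed the laser plane), predict when it reaches a given height so the projector can illuminate it there. This is a toy calculation; the actual system also performs geometric calibration among camera, projector, laser plane, and drop generator.

```python
def time_to_height(drop_height, g=9.8):
    """Free-fall time for a drop released from rest to fall `drop_height`
    meters: h = 0.5 * g * t**2, so t = sqrt(2 * h / g)."""
    return (2.0 * drop_height / g) ** 0.5

def illumination_time(release_t, height_below_nozzle):
    """Absolute time at which to project light at the drop when it is
    `height_below_nozzle` meters below its release point."""
    return release_t + time_to_height(height_below_nozzle)
```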
DOI: 10.1109/cvpr.2009.5204316 (published 2009-06-20)
引用次数: 5
Efficient acquisition of human existence priors from motion trajectories
H. Habe, Hidehito Nakagawa, M. Kidode
This paper reports a method for acquiring the prior probability of human existence by using past human trajectories and the color of an image. The priors play important roles in human detection as well as in scene understanding. The proposed method is based on the assumption that a person can exist again in an area where he/she existed in the past. In order to acquire the priors efficiently, a high prior probability is assigned to an area having the same color as past human trajectories. We use a particle filter for representing the prior probability. Therefore, we can represent a complex prior probability using only a few parameters. Through experiments, we confirmed that our proposed method can acquire the prior probability efficiently and it can realize highly accurate human detection using the obtained prior probability.
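The abstract above describes representing the prior probability of human existence with a particle filter, assigning high prior probability to areas whose color matches past human trajectories. A hedged sketch of one such update step is below; the Gaussian color-likelihood model, the grayscale color representation, and the parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random


def update_prior(particles, image_color_at, trajectory_colors, sigma=20.0):
    """One particle-filter step: reweight particles by color similarity to
    past-trajectory colors, then resample proportionally to the weights.

    particles: list of (x, y) positions representing the current prior
    image_color_at: callable (x, y) -> grayscale value in [0, 255]
    trajectory_colors: grayscale values observed along past human trajectories
    """
    weights = []
    for (x, y) in particles:
        c = image_color_at(x, y)
        # Likelihood = best Gaussian match against any past-trajectory color,
        # so areas colored like past trajectories get high prior probability.
        w = max(
            math.exp(-((c - tc) ** 2) / (2 * sigma ** 2))
            for tc in trajectory_colors
        )
        weights.append(w)
    total = sum(weights) or 1.0
    # Resampling keeps the particle count fixed while concentrating
    # particles where the prior probability is high.
    return random.choices(particles, weights=[w / total for w in weights],
                          k=len(particles))
```

With this sketch, a particle sitting on a region colored like a past trajectory (e.g. gray value 100 against a trajectory color of 100) dominates the resampled set, while a mismatched particle (gray value 200) is almost never kept.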
{"title":"Efficient acquisition of human existence priors from motion trajectories","authors":"H. Habe, Hidehito Nakagawa, M. Kidode","doi":"10.2197/ipsjtcva.2.145","journal":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","publicationDate":"2009-06-20"}
Citations: 1
Journal
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops