
2014 22nd International Conference on Pattern Recognition: Latest Publications

Low Rank Global Geometric Consistency for Partial-Duplicate Image Search
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.675
Li Yang, Yang Lin, Zhouchen Lin, H. Zha
All existing feature-point-based partial-duplicate image retrieval systems are confronted with the problem of false feature-point matches. To resolve this issue, geometric contexts are widely used to verify geometric consistency and remove false matches. However, most existing methods focus on local rather than global geometric contexts. Seeking global contexts has attracted much attention in recent years. This paper introduces a novel global geometric consistency, based on the low rankness of the squared distance matrices of feature points, to detect false matches. We cast the detection of false matches as the decomposition of a squared distance matrix into a low-rank matrix, which models the global geometric consistency, and a sparse matrix, which models the mismatched feature points. This leads to a Robust Principal Component Analysis model. Our Low Rank Global Geometric Consistency (LRGGC) is simple yet effective and theoretically sound. Extensive experimental results show that LRGGC is much more accurate than state-of-the-art geometric verification methods in detecting false matches and is robust to all kinds of similarity transformations (scaling, rotation, and translation) and even to slight changes in 3D viewpoint. Its speed is also highly competitive, even compared with methods based on local geometric consistency.
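The abstract casts false-match detection as a Robust PCA decomposition of a squared distance matrix. As a rough, illustrative sketch (not the authors' code), the snippet below runs the standard inexact-ALM RPCA solver on such a matrix and scores points by the magnitude of their rows in the recovered sparse part; the function name `rpca`, the toy point set, the parameter choices, and the `outlier_score` heuristic are all assumptions made here for illustration.

```python
import numpy as np

def rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into a low-rank part L and a sparse part S (D = L + S)
    with the standard inexact augmented Lagrange multiplier solver."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    two_norm = np.linalg.norm(D, 2)
    Y = D / max(two_norm, np.abs(D).max() / lam)   # dual variable initialization
    mu, rho = 1.25 / two_norm, 1.5
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular-value thresholding recovers the low-rank part.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Soft thresholding recovers the sparse part.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Z = D - L - S
        Y, mu = Y + mu * Z, min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return L, S

# Toy usage: squared pairwise distances of (hypothetical) feature points.
pts = np.random.rand(50, 2)
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
L_hat, S_hat = rpca(D2)
outlier_score = np.abs(S_hat).sum(axis=1)   # larger score -> more likely a false match
```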
Citations: 5
Background Subtraction with Dynamic Noise Sampling and Complementary Learning
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.406
Weifeng Ge, Yuhan Dong, Zhenhua Guo, Youbin Chen
Background subtraction is a popular technique for accurate foreground extraction against a stationary background. Since most outdoor surveillance videos are taken in complex environments, their "stationary" backgrounds change in unknown patterns, which makes perfect foreground extraction very difficult. Based on the visual background extractor (ViBe) scheme, this paper proposes a new background subtraction algorithm that includes two innovative mechanisms and several other technical improvements. The paper inherits and develops background modeling based on pixel sample values, and uses dynamic noise sampling and complementary learning to overcome the intrinsic shortcomings of pixel-wise background models. Besides, the algorithm relies on quantitative analysis without any estimation of the probability density function (pdf), so its computational cost is relatively low. Extensive experiments on a popular public dataset show that the proposed method is much more precise than ViBe and achieves the best precision and the highest average ranking compared with 27 state-of-the-art algorithms presented on the change detection website.
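The method builds on the ViBe pixel-sample background model. Below is a minimal sketch of that baseline only, assuming grayscale frames; the dynamic noise sampling and complementary learning described in the abstract are not reproduced, and the function names and parameter values (20 samples per pixel, matching radius 20, subsampling factor 16) are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(first_frame, n_samples=20):
    """Per-pixel sample set, seeded from the first grayscale frame plus small noise."""
    h, w = first_frame.shape
    noise = rng.integers(-10, 11, size=(n_samples, h, w))
    return np.clip(first_frame.astype(int)[None, ...] + noise, 0, 255)

def segment_and_update(model, frame, radius=20, min_matches=2, subsample=16):
    """Classify pixels as foreground and conservatively refresh background samples."""
    dist = np.abs(model - frame.astype(int)[None, ...])
    matches = (dist < radius).sum(axis=0)
    foreground = matches < min_matches
    # Background pixels replace one random sample with probability 1/subsample.
    update = (~foreground) & (rng.random(frame.shape) < 1.0 / subsample)
    sample_idx = rng.integers(0, model.shape[0], size=frame.shape)
    ys, xs = np.nonzero(update)
    model[sample_idx[ys, xs], ys, xs] = frame[ys, xs]
    return foreground, model
```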
Citations: 7
Velocity-Based Multiple Change-Point Inference for Unsupervised Segmentation of Human Movement Behavior
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.781
Lisa Senger, M. Schröer, J. H. Metzen, E. Kirchner
In order to transfer complex human behavior to a robot, segmentation methods are needed which are able to detect central movement patterns that can be combined to generate a wide range of behaviors. We propose an algorithm that segments human movements into behavior building blocks in a fully automatic way, called velocity-based Multiple Change-point Inference (vMCI). Based on characteristic bell-shaped velocity patterns that can be found in point-to-point arm movements, the algorithm infers segment borders using Bayesian inference. Different segment lengths and variations in the movement execution can be handled. Moreover, the number of segments the movement is composed of need not be known in advance. Several experiments are performed on synthetic and motion capturing data of human movements to compare vMCI with other techniques for unsupervised segmentation. The results show that vMCI is able to detect segment borders even in noisy data and in demonstrations with smooth transitions between segments.
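To give intuition for the bell-shaped velocity cue, here is a toy stand-in that simply marks local minima of the smoothed speed profile as candidate segment borders. It is not the Bayesian multiple change-point inference (vMCI) of the paper; the function name, smoothing window, and threshold are assumptions for illustration only.

```python
import numpy as np

def velocity_segment_borders(traj, dt=0.01, win=5, rel_thresh=0.15):
    """Mark candidate segment borders at local minima of the smoothed speed profile."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    smooth = np.convolve(speed, np.ones(win) / win, mode='same')
    thresh = rel_thresh * smooth.max()
    return [t for t in range(1, len(smooth) - 1)
            if smooth[t] < smooth[t - 1] and smooth[t] <= smooth[t + 1] and smooth[t] < thresh]

# Two concatenated point-to-point reaches, each with a bell-shaped speed profile.
t = np.linspace(0.0, 1.0, 100)
bell = np.sin(np.pi * t) ** 2
positions = 0.01 * np.cumsum(np.concatenate([bell, bell]))[:, None]
print(velocity_segment_borders(positions))   # border expected near sample 100
```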
Citations: 18
Principal Local Binary Patterns for Face Representation and Recognition
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.779
J. Yi, Fei Su
By fitting the Local Binary Patterns (LBP) histogram into the bag-of-words paradigm, we propose an LBP variant termed Principal Local Binary Patterns (PLBP), whose patterns are learned from the data in an unsupervised way. The learning problem turns out to be the same as Principal Component Analysis (PCA) and thus can be solved very efficiently. Unlike the manually specified patterns in LBP, which are distributed very non-uniformly, the learned patterns in PLBP adapt to the distribution of the data and are distributed very uniformly, which preserves more information than LBP in the binary coding process. Moreover, PLBP can exploit a much larger neighborhood than LBP to describe each point, which provides more information. Therefore, PLBP contains more discriminative information than LBP for separating different classes. The experimental results of face recognition on the FERET and LFW datasets clearly confirm the discriminative power and robustness of PLBP. It achieves very competitive performance on both datasets and is very simple and efficient to compute.
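One plausible reading of "PCA-learned binary patterns" is sketched below: learn projection vectors as the principal components of center-subtracted local patches, binarize the projections at each pixel into a code, and histogram the codes over the image. This is only an interpretation of the abstract, not the authors' exact formulation; the patch size, number of patterns, and sampling scheme are assumptions.

```python
import numpy as np

def learn_patterns(gray_images, patch=3, n_patterns=8, samples_per_image=2000, seed=0):
    """Learn projection vectors as principal components of center-subtracted patches."""
    rng = np.random.default_rng(seed)
    rows = []
    for img in gray_images:
        h, w = img.shape
        ys = rng.integers(patch // 2, h - patch // 2, size=samples_per_image)
        xs = rng.integers(patch // 2, w - patch // 2, size=samples_per_image)
        for y, x in zip(ys, xs):
            p = img[y - patch // 2:y + patch // 2 + 1,
                    x - patch // 2:x + patch // 2 + 1].astype(float)
            rows.append((p - float(img[y, x])).ravel())
    X = np.asarray(rows)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    return Vt[:n_patterns]                              # each row is one learned pattern

def pattern_histogram(img, W, patch=3):
    """Binarize the projections at every pixel into a code and histogram the codes."""
    h, w = img.shape
    codes = []
    for y in range(patch // 2, h - patch // 2):
        for x in range(patch // 2, w - patch // 2):
            p = img[y - patch // 2:y + patch // 2 + 1,
                    x - patch // 2:x + patch // 2 + 1].astype(float)
            bits = (W @ (p - float(img[y, x])).ravel()) > 0
            codes.append(int(np.dot(bits, 1 << np.arange(len(bits)))))
    return np.bincount(codes, minlength=2 ** W.shape[0])
```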
Citations: 0
Cost-Sensitive Transformation for Chinese Address Recognition
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.499
Shujing Lu, Xiaohua Wei, Yue Lu
This paper proposes a cost-sensitive transformation for improving handwritten address recognition by converting a general-purpose handwritten Chinese character recognition engine into a special-purpose one. First, the class probabilities produced by the character recognition engine for assigning a sample to candidate classes are transformed into expected costs based on the theoretically optimal Naive Bayes prediction. Then the candidate probabilities are re-estimated from the expected costs. Two general-purpose offline handwritten Chinese character recognition engines, PAIS and HAW, are tested in our experiments by applying them in a handwritten Chinese address recognition system. 1822 live handwritten Chinese address images are tested with multiple cost matrices. Experimental results show that the cost-sensitive transformation improves the performance of general-purpose recognition engines on handwritten Chinese address recognition.
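The transformation the abstract refers to is, at its core, the standard Bayes-risk computation: convert posterior class probabilities into expected costs under a cost matrix and re-rank candidates by those costs. A minimal sketch follows; the cost-matrix values are invented for illustration and are not those used in the paper.

```python
import numpy as np

def expected_costs(class_probs, cost_matrix):
    """Turn posterior class probabilities into expected costs:
    R(i | x) = sum_j C[i, j] * P(j | x), the Bayes risk of deciding class i."""
    return class_probs @ cost_matrix.T

def reestimate_candidates(class_probs, cost_matrix):
    """Re-rank candidate classes by expected cost instead of raw probability."""
    risks = expected_costs(class_probs, cost_matrix)
    order = np.argsort(risks, axis=1)          # lowest expected cost first
    return order, risks

# Toy example with 3 classes: deciding class 2 when the truth is class 0 costs more.
P = np.array([[0.5, 0.3, 0.2]])
C = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [5.0, 1.0, 0.0]])
order, risks = reestimate_candidates(P, C)
print(order, risks)
```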
Citations: 4
Image Retrieval Based on Anisotropic Scaling and Shearing Invariant Geometric Coherence
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.677
Xiaomeng Wu, K. Kashino
Imposing a spatial coherence constraint on image matching is becoming a necessity for local feature based object retrieval. We tackle the affine invariance problem of the prior spatial coherence model and propose a novel approach for geometrically stable image retrieval. Compared with related studies focusing simply on translation, rotation, and isotropic scaling, our approach can deal with more significant transformations including anisotropic scaling and shearing. Our contribution consists of revisiting the first-order affine adaptation approach and extending its application to represent the geometric coherence of a second-order local feature structure. We comprehensively evaluated our approach using Flickr Logos 32, Holiday, and Oxford Buildings benchmarks. Extensive experimentation and comparisons with state-of-the-art spatial coherence models demonstrate the superiority of our approach in image retrieval tasks.
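To illustrate why an affine model, unlike a similarity model, accounts for anisotropic scaling and shearing, the sketch below fits a least-squares 2x3 affine transform to putative matches and flags matches with large reprojection residuals. This is generic affine verification for illustration, not the paper's second-order geometric-coherence representation; the function names and residual threshold are assumptions.

```python
import numpy as np

def affine_from_matches(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points.
    An affine model covers anisotropic scaling and shearing, not just similarities."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    A[0::2, 0:2] = src; A[0::2, 2] = 1
    A[1::2, 3:5] = src; A[1::2, 5] = 1
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def affine_inliers(src, dst, max_residual=5.0):
    """Flag matches whose reprojection error under the fitted affine model is small."""
    M = affine_from_matches(src, dst)
    pred = src @ M[:, :2].T + M[:, 2]
    residual = np.linalg.norm(pred - dst, axis=1)
    return residual < max_residual
```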
Citations: 4
Handwritten Text Segmentation Using Elastic Shape Analysis
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.432
S. Kurtek, Anuj Srivastava
Segmentation of handwritten text into individual characters is an important step in many handwriting recognition tasks. In this paper, we present two segmentation algorithms based on elastic shape analysis of parameterized, planar curves. The shape analysis methodology provides matching, comparison and averaging of handwritten curves in a unified framework, which are very useful tools for designing segmentation algorithms. The first type of segmentation can be performed by splitting a full word into individual characters using a matching function. Another type of segmentation can be obtained by matching parts of the handwritten words to a given individual character. We validate the two proposed algorithms on real handwritten signatures and words coming from the SVC 2004 and the UNIPEN ICROW 2003 datasets. We show that the proposed methods are able to successfully segment text coming from highly variable handwriting styles.
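Elastic shape analysis commonly represents curves by their square-root velocity function (SRVF). The sketch below computes the SRVF of a sampled planar curve and an L2 distance between two SRVFs, deliberately skipping the reparameterization (dynamic-programming) step and the paper's segmentation logic; it is a simplified illustration under those assumptions.

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function of a planar curve sampled as an (n, 2) array."""
    t = np.linspace(0, 1, len(curve))
    v = np.gradient(curve, t, axis=0)                   # velocity along the curve
    speed = np.linalg.norm(v, axis=1)
    return v / np.sqrt(np.maximum(speed, 1e-8))[:, None]

def elastic_distance_no_reparam(c1, c2):
    """L2 distance between SRVFs; a simplified proxy that omits reparameterization."""
    q1, q2 = srvf(c1), srvf(c2)
    t = np.linspace(0, 1, len(q1))
    return np.sqrt(np.trapz(((q1 - q2) ** 2).sum(axis=1), t))

# Toy usage: compare a circle with a stretched ellipse.
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
ellipse = np.stack([1.5 * np.cos(theta), np.sin(theta)], axis=1)
print(elastic_distance_no_reparam(circle, ellipse))
```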
Citations: 6
Effective Part Localization in Latent-SVM Training
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.732
Yaodong Chen, Renfa Li
Deformable part models show remarkable detection performance for a variety of object categories. During training, these models rely on energy-based methods and heuristic initialization to search for and localize parts, which is equivalent to learning local object features. Due to weak supervision, however, the learned part detectors contain a lot of noise and are not reliable enough to classify the object. This paper investigates the part localization problem and extends the latent-SVM by incorporating local consistency of image features. The objective is to adaptively select part sub-windows that overlap semantically meaningful components as much as possible, which leads to more reliable learning of the part detectors in a weakly supervised setting. The main idea of our method is to estimate part-specific color/texture models as well as the edge distribution within each training example, followed by a foreground segmentation for part localization. The experimental results show an overall improvement of about 3% mAP over the latent-SVM on non-rigid objects.
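As background for the part-localization discussion, the sketch below shows a generic DPM-style placement of a single part: maximize the filter response minus a quadratic deformation cost around an anchor, computed naively rather than with a distance transform. It does not include the paper's color/texture models or foreground segmentation; the names and deformation weights are assumptions.

```python
import numpy as np

def place_part(response_map, anchor, deform=(0.05, 0.05)):
    """Place one part by maximizing filter response minus a quadratic deformation cost."""
    h, w = response_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cost = deform[0] * (ys - anchor[0]) ** 2 + deform[1] * (xs - anchor[1]) ** 2
    score = response_map - cost
    best = np.unravel_index(np.argmax(score), score.shape)
    return best, score[best]

# Toy usage with a hypothetical part-filter response map.
response = np.random.rand(40, 60)
print(place_part(response, anchor=(20, 30)))
```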
Citations: 0
Dominant Sets as a Framework for Cluster Ensembles: An Evolutionary Game Theory Approach
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.595
Alireza Chakeri, L. Hall
Ensemble clustering aggregates partitions obtained from several individual clustering algorithms. This can improve the accuracy of the individual methods and provide robustness against variability in the methods applied. Theorems show that one can find dominant sets (clusters) very efficiently by using an evolutionary game-theoretic approach. Experiments on an MRI data set consisting of about 4 million data points are detailed. The distributed dominant-set framework generates partitions of slightly better quality than clustering all the data with fuzzy C-means.
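Dominant sets are typically extracted by running discrete replicator dynamics on a pairwise affinity matrix, which matches the evolutionary game-theoretic view in the abstract. A minimal sketch of that standard iteration follows; the toy affinity matrix is invented for illustration, and the paper's distributed ensemble machinery is not included.

```python
import numpy as np

def dominant_set(A, tol=1e-8, max_iter=2000):
    """Extract one dominant set from a symmetric, non-negative affinity matrix A
    using discrete replicator dynamics: x_i <- x_i * (A x)_i / (x' A x)."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x   # support (x_i > 0) indicates cluster membership

# Toy affinity with two blobs; the tighter blob emerges as the dominant set.
A = np.array([[0.0, 0.90, 0.80, 0.10, 0.10],
              [0.90, 0.0, 0.85, 0.10, 0.10],
              [0.80, 0.85, 0.0, 0.10, 0.10],
              [0.10, 0.10, 0.10, 0.0, 0.70],
              [0.10, 0.10, 0.10, 0.70, 0.0]])
print(np.round(dominant_set(A), 3))
```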
Citations: 6
Compound Exemplar Based Object Detection by Incremental Random Forest
Pub Date : 2014-12-08 DOI: 10.1109/ICPR.2014.417
Kai Ma, J. Ben-Arie
This paper describes a new hybrid detection method that combines an exemplar-based approach with discriminative patch selection. More specifically, we apply a modified random forest to retrieve stored exemplar patches that are locally similar to the input while rejecting background patches. After this patch retrieval stage, a recursive algorithm based on dynamic-programming 2D matching optimization is applied to enforce the geometric constraints of object patches. Experiments demonstrate that the proposed approach performs well while maintaining the capability for incremental learning.
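A rough sketch of the retrieve-or-reject idea follows, using scikit-learn's (non-incremental) RandomForestClassifier on synthetic patch features: test patches that the forest assigns to an exemplar class are kept for the later 2D matching stage, while patches labeled as background are dropped. The data, labels, and forest settings are assumptions, and this is not the incremental forest described in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic flattened patch descriptors: label 0 = an exemplar id, label -1 = background.
exemplar_patches = rng.normal(loc=1.0, size=(200, 64))
background_patches = rng.normal(loc=0.0, size=(200, 64))
X = np.vstack([exemplar_patches, background_patches])
y = np.concatenate([np.zeros(200, dtype=int), -np.ones(200, dtype=int)])

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Keep only test patches the forest assigns to some exemplar; reject background.
test_patches = rng.normal(loc=1.0, size=(10, 64))
retrieved = forest.predict(test_patches) != -1
print(retrieved)
```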
Citations: 9