
2013 IEEE International Conference on Computer Vision: Latest Publications

Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.386
Martin Köstinger, Paul Wohlhart, P. Roth, H. Bischof
In this paper, we raise important issues concerning the evaluation complexity of existing Mahalanobis metric learning methods. The complexity scales linearly with the size of the dataset, which is especially cumbersome at large scale or for real-time applications with a limited time budget. To alleviate this problem, we propose to represent the dataset by a fixed number of discriminative prototypes. In particular, we introduce a new method that jointly chooses the positioning of the prototypes and optimizes the Mahalanobis distance metric with respect to them. We show that choosing the positioning of the prototypes and learning the metric in parallel drastically reduces the evaluation effort while maintaining the discriminative essence of the original dataset. Moreover, for most problems, performing k-nearest prototype (k-NP) classification on the condensed dataset leads to even better generalization than k-NN classification using all data. Results on a variety of challenging benchmarks, including standard machine learning datasets as well as the challenging Public Figures Face Database, demonstrate the power of our method. On the competitive machine learning benchmarks we are comparable to the state of the art while being more efficient. On the face benchmark we clearly outperform the state of the art in Mahalanobis metric learning with drastically reduced evaluation effort.
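The joint training of prototypes and metric is the paper's contribution and is not reproduced here. As a minimal sketch of the evaluation side, the following assumes a learned matrix M and a condensed prototype set are already given and implements plain k-nearest-prototype voting; all names are illustrative:

```python
import numpy as np

def knp_predict(X, prototypes, proto_labels, M, k=3):
    """Classify each row of X by majority vote among its k nearest
    prototypes under the Mahalanobis metric d(x, p) = (x-p)^T M (x-p).
    Evaluation cost scales with the number of prototypes, not the
    training set size."""
    diffs = X[:, None, :] - prototypes[None, :, :]           # (n, m, d)
    dists = np.einsum('nmd,de,nme->nm', diffs, M, diffs)     # squared distances
    knn = np.argsort(dists, axis=1)[:, :k]                   # k nearest prototypes
    votes = proto_labels[knn]
    return np.array([np.bincount(v).argmax() for v in votes])

# toy usage: four prototypes condensing a two-class set, Euclidean M
rng = np.random.default_rng(0)
prototypes = np.array([[0., 0.], [1., 1.], [5., 5.], [6., 6.]])
proto_labels = np.array([0, 0, 1, 1])
X = rng.normal(5.5, 0.3, size=(3, 2))
print(knp_predict(X, prototypes, proto_labels, np.eye(2)))   # -> [1 1 1]
```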
Citations: 12
Stable Hyper-pooling and Query Expansion for Event Detection
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.229
Matthijs Douze, Jérôme Revaud, C. Schmid, H. Jégou
This paper makes two complementary contributions to event retrieval in large collections of videos. First, we propose hyper-pooling strategies that encode the frame descriptors into a representation of the video sequence in a stable manner. Our best choices compare favorably with regular pooling techniques based on k-means quantization. Second, we introduce a technique to improve the ranking. It can be interpreted either as a query expansion method or as a similarity adaptation based on the local context of the query video descriptor. Experiments on public benchmarks show that our methods are complementary and improve event retrieval results, without sacrificing efficiency.
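For context, here is a minimal sketch of the k-means-quantization pooling baseline the paper compares against, not the proposed stable hyper-pooling itself; the frame descriptors and centroids are assumed given:

```python
import numpy as np

def pool_video(frame_descs, centroids):
    """k-means-quantization baseline: route each frame descriptor to its
    nearest centroid, sum-pool per cell, concatenate, L2-normalize."""
    d2 = ((frame_descs[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                   # hard assignment per frame
    k, dim = centroids.shape
    cells = np.zeros((k, dim))
    for c in range(k):
        members = frame_descs[assign == c]
        if len(members):
            cells[c] = members.sum(axis=0)
    rep = cells.ravel()
    norm = np.linalg.norm(rep)
    return rep / norm if norm > 0 else rep
```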
Citations: 37
Find the Best Path: An Efficient and Accurate Classifier for Image Hierarchies
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.40
Min Sun, Wanming Huang, S. Savarese
Many methods have been proposed to solve the image classification problem for a large number of categories. Among them, methods based on tree-structured representations achieve a good trade-off between accuracy and test-time efficiency. While focusing on learning a tree-shaped hierarchy and the corresponding set of classifiers, most of them [11, 2, 14] use a greedy prediction algorithm for test-time efficiency. We argue that the dramatic decrease in accuracy at high efficiency is caused by the specific design choices of the learning and greedy prediction algorithms. In this work, we propose a classifier which achieves a better trade-off between efficiency and accuracy for a given tree-shaped hierarchy. First, we cast the classification problem as finding the best path in the hierarchy, and introduce a novel branch-and-bound-like algorithm to efficiently search for the best path. Second, we jointly train the classifiers using a novel Structured SVM (SSVM) formulation with additional bound constraints. As a result, our method achieves a significant 4.65%, 5.43%, and 4.07% (relative 24.82%, 41.64%, and 109.79%) improvement in accuracy at high efficiency compared to state-of-the-art greedy "tree-based" methods [14] on the Caltech-256 [15], SUN [32], and ImageNet 1K [9] datasets, respectively. Finally, we show that our branch-and-bound-like algorithm naturally ranks the paths in the hierarchy (Fig. 8) so that users can further process them.
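The paper's bounds come out of its SSVM training and are not reproduced here. Assuming each node carries a classifier score and an admissible upper bound on the best completion below it (zero at leaves), a generic best-first search in the spirit of the branch-and-bound-like inference might look as follows; the structure and scores are made up:

```python
import heapq

def best_path(children, score, bound):
    """Best-first search for the highest-scoring root-to-leaf path (node 0
    is the root). score[n] is the classifier score of node n; bound[n] is an
    admissible upper bound on the best completion below n (0 at leaves).
    With admissible bounds, the first leaf popped closes an optimal path."""
    heap = [(-(score[0] + bound[0]), [0])]       # max-heap via negation
    while heap:
        neg_f, path = heapq.heappop(heap)
        node = path[-1]
        if not children[node]:                   # leaf reached: optimal path
            return path, -neg_f
        g = sum(score[n] for n in path)          # score accumulated so far
        for c in children[node]:
            f = g + score[c] + bound[c]          # optimistic estimate via c
            heapq.heappush(heap, (-f, path + [c]))
    return None

# toy hierarchy: 0 -> {1, 2}, 1 -> {3, 4}; bounds computed offline
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
score = {0: 0.0, 1: 0.5, 2: 0.4, 3: 0.2, 4: 0.9}
bound = {0: 1.4, 1: 0.9, 2: 0.0, 3: 0.0, 4: 0.0}
print(best_path(children, score, bound))         # -> ([0, 1, 4], 1.4)
```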
Citations: 29
What Do You Do? Occupation Recognition in a Photo via Social Context
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.451
Ming Shao, Liangyue Li, Y. Fu
In this paper, we investigate the problem of recognizing the occupations of multiple people with arbitrary poses in a photo. Previous work utilizing a single person's nearly frontal clothing information and foreground/background context preliminarily showed that occupation recognition is computationally feasible in computer vision. In practice, however, multiple people with arbitrary poses are common in a photo, and recognizing their occupations is even more challenging. We argue that with appropriately built visual attribute, co-occurrence, and spatial configuration models learned through a structured SVM, we can recognize multiple people's occupations in a photo simultaneously. To evaluate our method's performance, we conduct extensive experiments on a new well-labeled occupation database with 14 representative occupations and over 7K images. Results on this database validate our method's effectiveness and show that occupation recognition is solvable in a more general setting.
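The weights in the paper are learned with a structured SVM; purely to illustrate the test-time objective, a brute-force joint inference over unary (appearance) and pairwise (co-occurrence) scores could look like the sketch below, with all scores invented for the example:

```python
import itertools
import numpy as np

def joint_occupations(unary, cooc):
    """Brute-force the joint labeling maximizing per-person appearance scores
    plus pairwise co-occurrence scores; feasible only for the small person
    counts typical of a single photo."""
    n, k = unary.shape
    best, best_score = None, -np.inf
    for labels in itertools.product(range(k), repeat=n):
        s = sum(unary[i, l] for i, l in enumerate(labels))
        s += sum(cooc[labels[i], labels[j]]
                 for i in range(n) for j in range(i + 1, n))
        if s > best_score:
            best, best_score = labels, s
    return best, float(best_score)

# made-up scores for 2 people over 3 occupations
unary = np.array([[2.0, 0.5, 0.1],
                  [0.4, 1.8, 0.3]])
cooc = np.array([[1.0, 0.2, 0.0],
                 [0.2, 1.5, 0.1],
                 [0.0, 0.1, 0.5]])
print(joint_occupations(unary, cooc))   # -> ((0, 1), 4.0)
```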
Citations: 31
Saliency Detection in Large Point Sets
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.446
Elizabeth Shtrom, G. Leifman, A. Tal
While saliency in images has been extensively studied in recent years, there is very little work on saliency of point sets. This is despite the fact that point sets and range data are becoming ever more widespread and have myriad applications. In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. Our algorithm is designed to cope with extremely large sets, which may contain tens of millions of points. Such data is typical of urban scenes, which have recently become commonly available on the web. No previous work has handled such data. For general data sets, we show that our results are competitive with those of saliency detection of surfaces, although we do not have any connectivity information. We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan.
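The paper's algorithm is engineered to scale to tens of millions of points; the quadratic toy below (requiring more than k points) only conveys the underlying intuition that a point is salient when its local geometry is distinct from that of its neighbors, with a crude stand-in descriptor rather than the paper's:

```python
import numpy as np

def point_saliency(points, k=16):
    """Toy distinctness score: a point is salient when its local geometry
    differs from that of its neighbors. The crude 1-D descriptor here (mean
    distance to the k nearest neighbors) stands in for the richer point
    descriptors a real system would use; everything is O(n^2)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    order = np.argsort(d2, axis=1)[:, :k]        # k nearest neighbor indices
    desc = np.sqrt(np.take_along_axis(d2, order, axis=1)).mean(axis=1)
    return np.abs(desc - desc[order].mean(axis=1))
```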
Citations: 67
Similarity Metric Learning for Face Recognition
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.299
Qiong Cao, Yiming Ying, Peng Li
Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating robustness to large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem, which guarantees the existence of its global solution. Experiments show that our proposed method achieves state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
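Without reproducing the training objective, a verification rule in the family studied here combines a bilinear similarity with a Mahalanobis distance and thresholds the result; the matrices below are placeholders for what a learned model would supply:

```python
import numpy as np

def verify(x, y, M, G, threshold=0.0):
    """Verification rule in the family the paper studies: a bilinear
    similarity term minus a Mahalanobis distance term, thresholded.
    M and G stand for matrices a (convex) training stage would supply;
    here they are placeholders, not the paper's learned solution."""
    sim = x @ M @ y                 # bilinear similarity s_M(x, y)
    diff = x - y
    dist = diff @ G @ diff          # squared Mahalanobis distance d_G(x, y)
    return bool(sim - dist > threshold)
```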
Citations: 207
Forward Motion Deblurring
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.185
Shicheng Zheng, Li Xu, Jiaya Jia
We handle a special type of motion blur considering that cameras move primarily forward or backward. Solving this type of blur is of unique practical importance since nearly all car, traffic and bike-mounted cameras follow out-of-plane translational motion. We start with the study of geometric models and analyze the difficulty of existing methods to deal with them. We also propose a solution accounting for depth variation. Homographies associated with different 3D planes are considered and solved for in an optimization framework. Our method is verified on several natural image examples that cannot be satisfyingly dealt with by previous methods.
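The geometric core can be made concrete. For a camera translating along its optical axis, the homography induced by a fronto-parallel plane at a given depth is a pure scaling about the principal point, so the blur is a depth-dependent zoom. A sketch with an assumed intrinsic matrix follows; sign conventions vary with the direction of motion:

```python
import numpy as np

def forward_homography(K, tz, depth):
    """Homography induced by a pure forward translation t = (0, 0, tz)
    for a fronto-parallel plane n = (0, 0, 1) at the given depth:
    H = K (I + t n^T / depth) K^{-1}. In pixel coordinates this is a
    scaling about the principal point, which is why forward-motion blur
    looks like a depth-dependent zoom."""
    T = np.eye(3)
    T[2, 2] += tz / depth
    return K @ T @ np.linalg.inv(K)

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
for depth in (2.0, 5.0, 20.0):
    H = forward_homography(K, tz=0.1, depth=depth)
    print(depth, H[0, 0] / H[2, 2])   # nearer planes scale more
```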
Citations: 47
Structured Forests for Fast Edge Detection
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.231
Piotr Dollár, C. L. Zitnick
Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn an edge detector that is both accurate and computationally efficient. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains real-time performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and the NYU Depth dataset. Finally, we show the potential of our approach as a general-purpose edge detector by showing that our learned edge models generalize well across datasets.
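One reading of the label-discretization step, sketched below: encode each patch's segmentation mask as same-segment indicators over random pixel pairs, then quantize the encodings into a few pseudo-labels via the signs of their top principal components. This is an illustration of the idea, not the authors' exact pipeline:

```python
import numpy as np

def discretize_masks(masks, n_pairs=64, n_labels=4, seed=0):
    """Map structured labels (segmentation masks over a patch) to a small
    discrete set so standard information gain can be computed: encode each
    mask as same-segment indicators over random pixel pairs, then quantize
    the encodings by the signs of their top principal components."""
    rng = np.random.default_rng(seed)
    n, h, w = masks.shape
    flat = masks.reshape(n, -1)
    i = rng.integers(0, h * w, n_pairs)
    j = rng.integers(0, h * w, n_pairs)
    z = (flat[:, i] == flat[:, j]).astype(float)   # pairwise same-segment bits
    z -= z.mean(axis=0)
    bits = int(np.log2(n_labels))
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    signs = (z @ vt[:bits].T) > 0                  # sign pattern = pseudo-label
    return (signs * (1 << np.arange(bits))).sum(axis=1)

# masks identical up to relabeling land in the same pseudo-class
masks = np.stack([np.zeros((8, 8), int),
                  np.tile([0, 1], (8, 4)),
                  np.ones((8, 8), int)])
print(discretize_masks(masks))   # first and last get the same label
```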
Citations: 934
From Where and How to What We See
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.83
S. Karthikeyan, V. Jagadeesh, Renuka Shenoy, M. Eckstein, B. S. Manjunath
Eye movement studies have confirmed that overt attention is highly biased towards faces and text regions in images. In this paper we explore a novel problem: predicting face and text regions in images using eye tracking data from multiple subjects. The problem is challenging as we aim to predict the semantics (face/text/background) from eye tracking data alone, without utilizing any image information. The proposed algorithm spatially clusters the eye tracking data obtained on an image into coherent groups and subsequently models the likelihood of the clusters containing faces and text using a fully connected Markov Random Field (MRF). Given the eye tracking data from a test image, it reliably predicts potential face/head (humans, dogs and cats) and text locations. Furthermore, the approach can be used to select regions of interest for further analysis by face and text detectors. The hybrid eye position/object detector approach achieves better detection performance and reduced computation time compared to using the object detection algorithm alone. We also present a new eye tracking dataset collected from 15 subjects on 300 images selected from the ICDAR, Street-view, Flickr, and Oxford-IIIT Pet datasets.
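As a stand-in for the spatial clustering stage, plain k-means over fixation coordinates plus simple per-cluster statistics (count, spread) sketches the kind of features that would feed the MRF; the MRF itself is not reproduced, and all names are illustrative:

```python
import numpy as np

def cluster_fixations(fixations, n_clusters=5, iters=20, seed=0):
    """Plain k-means over fixation coordinates, standing in for the paper's
    spatial clustering step; per-cluster statistics (fixation count, spread)
    sketch the features that would feed the face/text/background MRF."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(fixations, dtype=float)
    centers = pts[rng.choice(len(pts), n_clusters, replace=False)].copy()
    for _ in range(iters):
        d2 = ((pts[:, None] - centers[None]) ** 2).sum(-1)   # (n, k) distances
        assign = d2.argmin(axis=1)
        for c in range(n_clusters):
            members = pts[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    feats = [(int((assign == c).sum()),
              float(pts[assign == c].std(axis=0).mean())
              if (assign == c).any() else 0.0)
             for c in range(n_clusters)]
    return centers, assign, feats
```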
Citations: 32
Dynamic Pooling for Complex Event Recognition
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.339
Wei-Xin Li, Qian Yu, Ajay Divakaran, N. Vasconcelos
The problem of adaptively selecting pooling regions for the classification of complex video events is considered. Complex events are defined as events composed of several characteristic behaviors, whose temporal configuration can change from sequence to sequence. A dynamic pooling operator is defined so as to enable a unified solution to the problems of event specific video segmentation, temporal structure modeling, and event detection. Video is decomposed into segments, and the segments most informative for detecting a given event are identified, so as to dynamically determine the pooling operator most suited for each sequence. This dynamic pooling is implemented by treating the locations of characteristic segments as hidden information, which is inferred, on a sequence-by-sequence basis, via a large-margin classification rule with latent variables. Although the feasible set of segment selections is combinatorial, it is shown that a globally optimal solution to the inference problem can be obtained efficiently, through the solution of a series of linear programs. Besides the coarse-level location of segments, a finer model of video structure is implemented by jointly pooling features of segment-tuples. Experimental evaluation demonstrates that the resulting event detector has state-of-the-art performance on challenging video datasets.
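In a simplified setting the latent inference is easy to see: with average-pooling over exactly k selected segments and a linear scorer, the best selection is just the top-k segments by individual score. The paper solves a more general selection via a series of linear programs; the sketch below covers only this special case:

```python
import numpy as np

def dynamic_pool(segment_feats, w, k):
    """Latent inference for a simplified dynamic-pooling detector: with
    average-pooling over exactly k selected segments and a linear scorer w,
    the best selection is simply the k segments that score highest under w.
    (The paper handles a more general selection via a series of linear
    programs; this covers only the fixed-cardinality special case.)"""
    scores = segment_feats @ w                   # per-segment scores
    sel = np.argsort(scores)[-k:]                # indices of chosen segments
    pooled = segment_feats[sel].mean(axis=0)     # pooled video representation
    return sel, pooled, float(w @ pooled)
```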
Citations: 50