
IEEE Winter Conference on Applications of Computer Vision: Latest Publications

Hierarchical representation of videos with spatio-temporal fibers
Ratnesh Kumar, G. Charpiat, M. Thonnat
We propose a new representation of videos as spatio-temporal fibers. These fibers are clusters of trajectories that are meshed spatially in the image domain. They form a hierarchical partition of the video into regions that are coherent in time and space. They can be seen as dense, spatially organized, long-term optical flow. Their robustness to noise and ambiguities is ensured by taking into account the reliability of each source of information. Since fibers allow users to easily handle moving objects in videos, they prove useful for video editing, as demonstrated in a video inpainting example.
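The fibers themselves are built from long-term point trajectories. As a rough illustration of that raw material only (not the authors' reliability-weighted hierarchical clustering), the sketch below tracks points with OpenCV's pyramidal Lucas-Kanade tracker and groups the resulting trajectories by overall motion; the video path, the 30-frame window and the number of clusters are placeholder assumptions.

```python
# Rough illustration only (not the authors' method): build long-term point trajectories
# with OpenCV's pyramidal Lucas-Kanade tracker, then group them by overall motion.
# "video.mp4", the 30-frame window and the 4 clusters are placeholder assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
tracks = [[p.ravel()] for p in pts]              # one list of (x, y) positions per trajectory

for _ in range(30):                              # track over a short temporal window
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    for tr, p, st in zip(tracks, nxt, status):
        if st:                                   # extend only reliably tracked points
            tr.append(p.ravel())
    pts, prev_gray = nxt, gray

# Keep trajectories that survived most of the window and cluster their net displacements.
full = [np.asarray(tr) for tr in tracks if len(tr) > 25]
feats = np.array([tr[-1] - tr[0] for tr in full])
k = max(1, min(4, len(full)))
labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
print("trajectories per motion cluster:", np.bincount(labels))
```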
{"title":"Hierarchical representation of videos with spatio-temporal fibers","authors":"Ratnesh Kumar, G. Charpiat, M. Thonnat","doi":"10.1109/WACV.2014.6836064","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836064","url":null,"abstract":"We propose a new representation of videos, as spatio-temporal fibers. These fibers are clusters of trajectories that are meshed spatially in the image domain. They form a hierarchical partition of the video into regions that are coherent in time and space. They can be seen as dense, spatially-organized, long-term optical flow. Their robustness to noise and ambiguities is ensured by taking into account the reliability of each source of information. As fibers allow users to handle easily moving objects in videos, they prove useful for video editing, as demonstrated in a video inpainting example.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86636857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Furniture-geek: Understanding fine-grained furniture attributes from freely associated text and tags
Vicente Ordonez, V. Jagadeesh, Wei Di, Anurag Bhardwaj, Robinson Piramuthu
As the amount of user-generated content on the internet grows, it becomes ever more important to come up with vision systems that learn directly from weakly annotated and noisy data. We leverage a large-scale collection of user-generated content, comprising images, tags and titles/captions of furniture inventory from an e-commerce website, to discover and categorize learnable visual attributes. Furniture categories have long been the quintessential example of why computer vision is hard, and we make one of the first attempts to understand them through a large-scale, weakly annotated dataset. We focus on a handful of furniture categories that are associated with a large number of fine-grained attributes. We propose a set of localized feature representations built on top of state-of-the-art computer vision representations originally designed for fine-grained object categorization. We report a thorough empirical characterization of the visual identifiability of various fine-grained attributes using these representations and show encouraging results on finding iconic images and on multi-attribute prediction.
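The multi-attribute prediction reported above can be framed as independent per-attribute classifiers over image descriptors learned from weak tags. The sketch below illustrates only that framing, with one-vs-rest logistic regression on synthetic stand-in data; the paper's localized fine-grained representations and its feature extraction are not reproduced.

```python
# Minimal sketch: multi-attribute prediction as one-vs-rest classification over per-image
# descriptors derived from weak tags. Feature vectors and tag labels are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_images, n_dims, n_attrs = 200, 64, 5           # e.g. attributes like "leather", "tufted", ...
X = rng.normal(size=(n_images, n_dims))          # stand-in for per-image descriptors
W = rng.normal(size=(n_attrs, n_dims))
Y = (X @ W.T + rng.normal(scale=2.0, size=(n_images, n_attrs)) > 0).astype(int)  # noisy tags

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X[:150], Y[:150])
probs = clf.predict_proba(X[150:])               # per-attribute confidence for unseen images
print("predicted attribute probabilities, first test image:", np.round(probs[0], 2))
```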
{"title":"Furniture-geek: Understanding fine-grained furniture attributes from freely associated text and tags","authors":"Vicente Ordonez, V. Jagadeesh, Wei Di, Anurag Bhardwaj, Robinson Piramuthu","doi":"10.1109/WACV.2014.6836083","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836083","url":null,"abstract":"As the amount of user generated content on the internet grows, it becomes ever more important to come up with vision systems that learn directly from weakly annotated and noisy data. We leverage a large scale collection of user generated content comprising of images, tags and title/captions of furniture inventory from an e-commerce website to discover and categorize learnable visual attributes. Furniture categories have long been the quintessential example of why computer vision is hard, and we make one of the first attempts to understand them through a large scale weakly annotated dataset. We focus on a handful of furniture categories that are associated with a large number of fine-grained attributes. We propose a set of localized feature representations built on top of state-of-the-art computer vision representations originally designed for fine-grained object categorization. We report a thorough empirical characterization on the visual identifiability of various fine-grained attributes using these representations and show encouraging results on finding iconic images and on multi-attribute prediction.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84157800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Data-driven exemplar model selection
Ishan Misra, Abhinav Shrivastava, M. Hebert
We consider the problem of discovering discriminative exemplars suitable for object detection. Due to the diversity in appearance of real-world objects, an object detector must capture variations in scale, viewpoint, illumination, etc. Current approaches do this by using mixtures of models, where each mixture component is designed to capture one (or a few) axes of variation. Current methods usually rely on heuristics to capture these variations; however, it is unclear which axes of variation exist and are relevant to a particular task. Another issue is the requirement of a large set of training images to capture such variations. Current methods do not scale to large training sets, either because of training time complexity [31] or test time complexity [26]. In this work, we explore the idea of compactly capturing task-appropriate variation from the data itself. We propose a two-stage data-driven process, which selects and learns a compact set of exemplar models for object detection. These selected models have an inherent ranking, which can be used for anytime/budgeted detection scenarios. Another benefit of our approach (beyond the computational speedup) is that the selected set of exemplar models performs better than the entire set.
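The abstract does not spell out the selection procedure, so the following is only a generic stand-in: a greedy coverage heuristic that ranks exemplar models by how many additional validation objects they detect, which also yields the kind of inherent ranking usable for anytime/budgeted detection.

```python
# Generic stand-in (not the paper's two-stage procedure): rank exemplar models greedily
# by how many additional validation objects they cover, given a score matrix
# scores[i, j] = detection score of exemplar i on validation object j.
import numpy as np

rng = np.random.default_rng(1)
n_exemplars, n_objects, threshold = 50, 300, 0.5
scores = rng.random((n_exemplars, n_objects))

covered = np.zeros(n_objects, dtype=bool)
ranking = []
for _ in range(n_exemplars):
    gains = ((scores > threshold) & ~covered).sum(axis=1)   # new objects each exemplar adds
    best = int(np.argmax(gains))
    if gains[best] == 0:                                     # nothing left to gain: stop early
        break
    ranking.append(best)
    covered |= scores[best] > threshold

print(f"selected {len(ranking)} of {n_exemplars} exemplars, "
      f"covering {covered.mean():.0%} of validation objects")
# For anytime/budgeted detection, run only the first k exemplars in `ranking`.
```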
{"title":"Data-driven exemplar model selection","authors":"Ishan Misra, Abhinav Shrivastava, M. Hebert","doi":"10.1109/WACV.2014.6836080","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836080","url":null,"abstract":"We consider the problem of discovering discriminative exemplars suitable for object detection. Due to the diversity in appearance in real world objects, an object detector must capture variations in scale, viewpoint, illumination etc. The current approaches do this by using mixtures of models, where each mixture is designed to capture one (or a few) axis of variation. Current methods usually rely on heuristics to capture these variations; however, it is unclear which axes of variation exist and are relevant to a particular task. Another issue is the requirement of a large set of training images to capture such variations. Current methods do not scale to large training sets either because of training time complexity [31] or test time complexity [26]. In this work, we explore the idea of compactly capturing task-appropriate variation from the data itself. We propose a two stage data-driven process, which selects and learns a compact set of exemplar models for object detection. These selected models have an inherent ranking, which can be used for anytime/budgeted detection scenarios. Another benefit of our approach (beyond the computational speedup) is that the selected set of exemplar models performs better than the entire set.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84218738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
Rotation estimation from cloud tracking
Sangwoo Cho, Enrique Dunn, Jan-Michael Frahm
We address the problem of online relative orientation estimation from streaming video captured by a sky-facing camera on a mobile device. Namely, we rely on the detection and tracking of visual features attained from cloud structures. Our proposed method achieves robust and efficient operation by combining real-time visual odometry modules, learning-based feature classification, and Kalman filtering within a robustness-driven data management framework, while sustaining frame-rate processing on a mobile device. The relatively large 3D distance between the camera and the observed cloud features is leveraged to simplify our processing pipeline. First, as an efficiency-driven optimization, we adopt a homography-based motion model and focus on estimating relative rotations across adjacent keyframes. To this end, we rely on efficient feature extraction, KLT tracking, and RANSAC-based model fitting. Second, to ensure the validity of our simplified motion model, we segregate detected cloud features from scene features through SVM classification. Finally, to make tracking more robust, we employ predictive Kalman filtering to enable feature persistence through temporary occlusions and manage feature spatial distribution to foster tracking robustness. Results exemplify the accuracy and robustness of the proposed approach and highlight its potential as a passive orientation sensor.
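The geometric core of this pipeline (KLT tracking between keyframes, a RANSAC-fitted homography, and a relative rotation recovered under the distant-scene assumption H ~ K R K^-1) can be sketched with OpenCV as below. This is only a sketch: the SVM cloud/scene classifier and the Kalman filtering stages are omitted, and the intrinsic matrix and keyframe images are placeholder assumptions.

```python
# Minimal sketch of the geometric core: KLT tracking between two keyframes, a RANSAC
# homography, and a relative rotation recovered under the "distant scene" assumption
# H ~ K R K^-1. The SVM cloud/sky classifier and Kalman smoothing are omitted; the
# intrinsic matrix K and the two keyframe images are placeholders you must supply.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed camera intrinsics

frame0 = cv2.imread("key0.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("key1.png", cv2.IMREAD_GRAYSCALE)

p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=400, qualityLevel=0.01, minDistance=8)
p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None)
good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

H, inliers = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)

# For a (nearly) purely rotating camera viewing a very distant scene, H = K R K^-1 up to
# scale, so an approximate relative rotation is R = K^-1 H K, re-orthonormalised via SVD.
R = np.linalg.inv(K) @ (H / H[2, 2]) @ K
U, _, Vt = np.linalg.svd(R)
R = U @ Vt
angle_deg = np.degrees(np.linalg.norm(cv2.Rodrigues(R)[0]))
print(f"relative rotation between keyframes: {angle_deg:.2f} deg "
      f"({int(inliers.sum())} RANSAC inliers)")
```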
{"title":"Rotation estimation from cloud tracking","authors":"Sangwoo Cho, Enrique Dunn, Jan-Michael Frahm","doi":"10.1109/WACV.2014.6836006","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836006","url":null,"abstract":"We address the problem of online relative orientation estimation from streaming video captured by a sky-facing camera on a mobile device. Namely, we rely on the detection and tracking of visual features attained from cloud structures. Our proposed method achieves robust and efficient operation by combining realtime visual odometry modules, learning based feature classification, and Kalman filtering within a robustness-driven data management framework, while achieving framerate processing on a mobile device. The relatively large 3D distance between the camera and the observed cloud features is leveraged to simplify our processing pipeline. First, as an efficiency driven optimization, we adopt a homography based motion model and focus on estimating relative rotations across adjacent keyframes. To this end, we rely on efficient feature extraction, KLT tracking, and RANSAC based model fitting. Second, to ensure the validity of our simplified motion model, we segregate detected cloud features from scene features through SVM classification. Finally, to make tracking more robust, we employ predictive Kalman filtering to enable feature persistence through temporary occlusions and manage feature spatial distribution to foster tracking robustness. Results exemplify the accuracy and robustness of the proposed approach and highlight its potential as a passive orientation sensor.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85886402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Small Hand-held Object Recognition Test (SHORT)
Jose Rivera-Rubio, Saad Idrees, I. Alexiou, Lucas Hadjilucas, A. Bharath
The ubiquity of smartphones with high-quality cameras and fast network connections will spawn many new applications. One of these is visual object recognition, an emerging smartphone feature which could play roles in high-street shopping, price comparisons and similar uses. There are also potential roles for such technology in assistive applications, such as for people who have visual impairment. We introduce the Small Hand-held Object Recognition Test (SHORT), a new dataset that aims to benchmark the performance of algorithms for recognising hand-held objects from either snapshots or videos acquired using hand-held or wearable cameras. We show that SHORT provides a set of images and ground truth that help assess the many factors that affect recognition performance. SHORT is designed to focus on the assistive systems context, though it can provide useful information on more general aspects of recognition performance for hand-held objects. We describe the present state of the dataset, comprising a small set of high-quality training images and a large set of nearly 135,000 smartphone-captured test images of 30 grocery products. In this version, SHORT addresses another context not covered by traditional datasets, in which high-quality catalogue images are compared with user-captured images of variable quality; this makes matching more challenging in SHORT than in other datasets. Images of similar quality are often not present in “database” and “query” datasets, a situation increasingly encountered in commercial applications. Finally, we compare the results of popular object recognition algorithms of different levels of complexity when tested against SHORT and discuss the research challenges arising from the particularities of visual object recognition of objects that are being held by users.
{"title":"Small Hand-held Object Recognition Test (SHORT)","authors":"Jose Rivera-Rubio, Saad Idrees, I. Alexiou, Lucas Hadjilucas, A. Bharath","doi":"10.1109/WACV.2014.6836057","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836057","url":null,"abstract":"The ubiquity of smartphones with high quality cameras and fast network connections will spawn many new applications. One of these is visual object recognition, an emerging smartphone feature which could play roles in high-street shopping, price comparisons and similar uses. There are also potential roles for such technology in assistive applications, such as for people who have visual impairment. We introduce the Small Hand-held Object Recognition Test (SHORT), a new dataset that aims to benchmark the performance of algorithms for recognising hand-held objects from either snapshots or videos acquired using hand-held or wearable cameras. We show that SHORT provides a set of images and ground truth that help assess the many factors that affect recognition performance. SHORT is designed to be focused on the assistive systems context, though it can provide useful information on more general aspects of recognition performance for hand-held objects. We describe the present state of the dataset, comprised of a small set of high quality training images and a large set of nearly 135,000 smartphone-captured test images of 30 grocery products. In this version, SHORT addresses another context not covered by traditional datasets, in which high quality catalogue images are being compared with variable quality user-captured images; this makes the matching more challenging in SHORT than other datasets. Images of similar quality are often not present in “database” and “query” datasets, a situation being increasingly encountered in commercial applications. Finally, we compare the results of popular object recognition algorithms of different levels of complexity when tested against SHORT and discuss the research challenges arising from the particularities of visual object recognition from objects that are being held by users.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88417994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Is my new tracker really better than yours?
Luka Cehovin, M. Kristan, A. Leonardis
The field of visual tracking evaluation sports an abundance of performance measures, used by various authors, and largely suffers from a lack of consensus about which measures should be preferred. This hampers cross-paper tracker comparison and faster advancement of the field. In this paper we provide an overview of the popular measures and performance visualizations, together with a critical theoretical and experimental analysis. We show that several measures are equivalent in terms of the information they provide for tracker comparison and, crucially, that some are more brittle than others. Based on our analysis we narrow down the set of potential measures to only two complementary ones that can be intuitively interpreted and visualized, thus pushing towards homogenization of the tracker evaluation methodology.
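The abstract does not name the two retained measures, so the sketch below merely illustrates two per-frame quantities that are widely used in this literature, region overlap and center error, computed from axis-aligned bounding boxes.

```python
# Minimal sketch of two widely used per-frame tracking measures, region overlap (IoU) and
# center error, computed from axis-aligned boxes (x, y, w, h). The paper's final pair of
# recommended measures is not named in the abstract, so this is illustrative only.
import numpy as np

def overlap(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def center_error(a, b):
    """Euclidean distance between box centers."""
    ca = np.array([a[0] + a[2] / 2, a[1] + a[3] / 2])
    cb = np.array([b[0] + b[2] / 2, b[1] + b[3] / 2])
    return float(np.linalg.norm(ca - cb))

# One ground-truth and one predicted box per frame (toy values).
gt = [(10, 10, 40, 40), (12, 11, 40, 40), (15, 14, 40, 40)]
pred = [(11, 9, 38, 42), (20, 18, 40, 40), (40, 40, 40, 40)]

print("mean overlap:     ", np.mean([overlap(g, p) for g, p in zip(gt, pred)]))
print("mean center error:", np.mean([center_error(g, p) for g, p in zip(gt, pred)]))
```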
{"title":"Is my new tracker really better than yours?","authors":"Luka Cehovin, M. Kristan, A. Leonardis","doi":"10.1109/WACV.2014.6836055","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836055","url":null,"abstract":"The problem of visual tracking evaluation is sporting an abundance of performance measures, which are used by various authors, and largely suffers from lack of consensus about which measures should be preferred. This is hampering the cross-paper tracker comparison and faster advancement of the field. In this paper we provide an overview of the popular measures and performance visualizations and their critical theoretical and experimental analysis. We show that several measures are equivalent from the point of information they provide for tracker comparison and, crucially, that some are more brittle than the others. Based on our analysis we narrow down the set of potential measures to only two complementary ones that can be intuitively interpreted and visualized, thus pushing towards homogenization of the tracker evaluation methodology.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90943230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 93
Offline learning of prototypical negatives for efficient online Exemplar SVM
Masato Takami, Peter Bell, B. Ommer
Online searches in large image databases must deliver sufficiently complete results in feasible time. Digitization campaigns have simplified the access to a huge number of images in the field of art history, which can be analyzed by detecting duplicates and similar objects in the dataset. A high recall is essential for the evaluation, and the search method therefore has to be robust against minor changes due to smearing or aging effects of the documents. At the same time, the computation time has to be short for the online search to be practical. By using an Exemplar SVM based classifier [12], a high recall can be achieved, but the mining of negatives and the multiple rounds of retraining for every search make the method too time-consuming. An even bigger problem is that, by allowing arbitrary query regions, it is not possible to provide a training set, which would be necessary to create a classifier. To solve this, we create a pool of general negatives offline in advance, which can be used with any arbitrary input in the online search step and requires only one short training round without the time-consuming mining. In a second step, this classifier is improved by using positive detections in an additional training round. This results in a classifier for online search in unlabeled datasets, which provides high recall in short calculation time.
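The training scheme described above (one positive query descriptor against a fixed, precomputed pool of generic negatives, followed by a single refinement round that adds confident detections as extra positives) can be sketched with a linear SVM. The descriptors below are synthetic stand-ins for real image features such as HOG of query regions.

```python
# Minimal sketch: an exemplar classifier trained from a single positive descriptor and a
# fixed offline pool of generic negatives, then refined once with confident detections.
# All descriptors are random stand-ins for real image features (e.g. HOG of query regions).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim = 128
negative_pool = rng.normal(size=(2000, dim))           # precomputed once, offline
query = rng.normal(loc=1.0, size=dim)                  # descriptor of the user's query region

# Round 1: one positive vs. the generic negative pool (class weights offset the imbalance).
X = np.vstack([query[None, :], negative_pool])
y = np.array([1] + [0] * len(negative_pool))
clf = LinearSVC(C=1.0, class_weight={1: 100.0, 0: 0.01}, max_iter=5000).fit(X, y)

# Score the database and take confident detections as additional positives.
database = np.vstack([rng.normal(loc=1.0, size=(20, dim)),     # true matches (toy)
                      rng.normal(size=(500, dim))])            # clutter (toy)
scores = clf.decision_function(database)
new_pos = database[scores > np.quantile(scores, 0.98)]

# Round 2: retrain with the expanded positive set; no hard-negative mining is needed.
X2 = np.vstack([query[None, :], new_pos, negative_pool])
y2 = np.array([1] * (1 + len(new_pos)) + [0] * len(negative_pool))
clf2 = LinearSVC(C=1.0, class_weight="balanced", max_iter=5000).fit(X2, y2)
print("retrained with", len(new_pos), "confident detections as extra positives")
```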
{"title":"Offline learning of prototypical negatives for efficient online Exemplar SVM","authors":"Masato Takami, Peter Bell, B. Ommer","doi":"10.1109/WACV.2014.6836075","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836075","url":null,"abstract":"Online searches in big image databases require sufficient results in feasible time. Digitization campaigns have simplified the access to a huge number of images in the field of art history, which can be analyzed by detecting duplicates and similar objects in the dataset. A high recall is essential for the evaluation and therefore the search method has to be robust against minor changes due to smearing or aging effects of the documents. At the same time the computational time has to be short to allow a practical use of the online search. By using an Exemplar SVM based classifier [12] a high recall can be achieved, but the mining of negatives and the multiple rounds of retraining for every search makes the method too time-consuming. An even bigger problem is that by allowing arbitrary query regions, it is not possible to provide a training set, which would be necessary to create a classifier. To solve this, we created a pool of general negatives offline in advance, which can be used by any arbitrary input in the online search step and requires only one short training round without the time-consuming mining. In a second step, this classifier is improved by using positive detections in an additional training round. This results in a classifier for the online search in unlabeled datasets, which provides high recall in short calculation time respectively.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77308573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
“Important stuff, everywhere!” Activity recognition with salient proto-objects as context
L. Rybok, Boris Schauerte, Ziad Al-Halah, R. Stiefelhagen
Object information is an important cue to discriminate between activities that draw part of their meaning from context. Most current work either ignores this information or relies on specific object detectors. However, such object detectors require a significant amount of training data and complicate the transfer of the action recognition framework to novel domains with different objects and object-action relationships. Motivated by recent advances in saliency detection, we propose to employ salient proto-objects for unsupervised discovery of object and object-part candidates and to use them as a contextual cue for activity recognition. Our experimental evaluation on three publicly available data sets shows that the integration of proto-objects and simple motion features substantially improves recognition performance, outperforming the state of the art.
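A minimal sketch of the proto-object discovery step is given below: a saliency map is thresholded and its connected components are kept as candidate regions. It uses OpenCV's spectral-residual saliency (opencv-contrib-python) as a stand-in for the paper's saliency model, and the downstream activity classifier and input frame are placeholders.

```python
# Minimal sketch: unsupervised proto-object candidates from a saliency map, obtained by
# thresholding OpenCV's spectral-residual saliency (requires opencv-contrib-python) and
# keeping connected components. The paper's saliency model and classifier are not reproduced.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                    # placeholder video frame

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(frame)      # float map in [0, 1]
sal_u8 = (sal_map * 255).astype(np.uint8)

# Binarise the map and take connected components as proto-object candidates.
_, mask = cv2.threshold(sal_u8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

for i in range(1, n):                              # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 100:                                 # drop tiny speckles
        print(f"proto-object candidate at ({x}, {y}) size {w}x{h}")
```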
{"title":"“Important stuff, everywhere!” Activity recognition with salient proto-objects as context","authors":"L. Rybok, Boris Schauerte, Ziad Al-Halah, R. Stiefelhagen","doi":"10.1109/WACV.2014.6836041","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836041","url":null,"abstract":"Object information is an important cue to discriminate between activities that draw part of their meaning from context. Most of current work either ignores this information or relies on specific object detectors. However, such object detectors require a significant amount of training data and complicate the transfer of the action recognition framework to novel domains with different objects and object-action relationships. Motivated by recent advances in saliency detection, we propose to employ salient proto-objects for unsupervised discovery of object- and object-part candidates and use them as a contextual cue for activity recognition. Our experimental evaluation on three publicly available data sets shows that the integration of proto-objects and simple motion features substantially improves recognition performance, outperforming the state-of-the-art.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79262611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
Object co-labeling in multiple images
Xi Chen, Arpit Jain, L. Davis
We introduce a new problem called object co-labeling, where the goal is to jointly annotate multiple images of the same scene that do not have temporal consistency. We present an adaptive framework for joint segmentation and recognition to solve this problem. We propose an objective function that considers not only appearance but also the consistency of appearance and context across images of the scene. A relaxed form of the cost function is minimized using an efficient quadratic programming solver. Our approach improves labeling performance compared to labeling each image individually. We also show the application of our co-labeling framework to other recognition problems such as label propagation in videos and object recognition in similar scenes. Experimental results demonstrate the efficacy of our approach.
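A generic relaxation of such an objective (appearance unaries plus a quadratic consistency penalty between corresponding regions across images, optimized over per-region label distributions on the simplex) is sketched below with projected gradient descent. This is not the authors' exact energy or their QP solver; the costs and the region-correspondence weights are synthetic.

```python
# Minimal sketch of a relaxed co-labeling objective: each region i holds a label
# distribution x_i on the simplex; the energy is sum_i <u_i, x_i> (appearance unaries)
# plus lambda * sum over region pairs w_ij * ||x_i - x_j||^2 (consistency between
# corresponding regions across images). Solved by projected gradient descent; this is a
# generic relaxation, not the authors' exact energy or QP solver. All inputs are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_labels, lam = 8, 3, 0.5
U = rng.random((n_regions, n_labels))              # unary costs (lower = better fit)
W = rng.random((n_regions, n_regions)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1) - 1
    k = np.arange(1, v.shape[1] + 1)
    rho = (u - css / k > 0).sum(axis=1)
    theta = css[np.arange(len(v)), rho - 1] / rho
    return np.maximum(v - theta[:, None], 0)

X = np.full((n_regions, n_labels), 1.0 / n_labels)
for _ in range(200):
    D = np.diag(W.sum(axis=1))
    grad = U + 2 * lam * (D - W) @ X               # gradient of unaries + Laplacian smoothness
    X = project_simplex(X - 0.05 * grad)

print("labels per region:", X.argmax(axis=1))
```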
{"title":"Object co-labeling in multiple images","authors":"Xi Chen, Arpit Jain, L. Davis","doi":"10.1109/WACV.2014.6836031","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836031","url":null,"abstract":"We introduce a new problem called object co-labeling where the goal is to jointly annotate multiple images of the same scene which do not have temporal consistency. We present an adaptive framework for joint segmentation and recognition to solve this problem. We propose an objective function that considers not only appearance but also appearance and context consistency across images of the scene. A relaxed form of the cost function is minimized using an efficient quadratic programming solver. Our approach improves labeling performance compared to labeling each image individually. We also show the application of our co-labeling framework to other recognition problems such as label propagation in videos and object recognition in similar scenes. Experimental results demonstrates the efficacy of our approach.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77957816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Multi class boosted random ferns for adapting a generic object detector to a specific video
Pramod Sharma, R. Nevatia
Detector adaptation is a challenging problem, and several methods have been proposed in recent years. We propose multi-class boosted random ferns for detector adaptation. First, we collect online samples in an unsupervised manner and divide the collected positive samples into different categories corresponding to different poses of the object. Then we train a multi-class boosted random fern adaptive classifier. Our adaptive classifier training focuses on two aspects: discriminability and efficiency. Boosting provides discriminative random ferns. For efficiency, our boosting procedure focuses on sharing the same feature among different classes, and multiple strong classifiers are trained in a single boosting framework. Experiments on challenging public datasets demonstrate the effectiveness of our approach.
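For reference, a plain (non-boosted) multi-class random-fern classifier in the semi-naive-Bayes style is sketched below: each fern is a fixed set of binary pixel comparisons whose outcomes index a class-conditional histogram. The paper's boosting and cross-class feature sharing are not reproduced, and the patches are synthetic stand-ins for detector windows.

```python
# Minimal sketch of a plain multi-class random-fern classifier (semi-naive Bayes over ferns
# of binary pixel tests); the paper's boosting and feature sharing are not reproduced.
# Patches are synthetic stand-ins for detector windows.
import numpy as np

rng = np.random.default_rng(0)
patch_size, n_ferns, tests_per_fern, n_classes = 16, 10, 8, 3
n_bins = 2 ** tests_per_fern

# Each fern is a fixed set of pixel-pair comparisons: feature = bitstring of I(p1) > I(p2).
pairs = rng.integers(0, patch_size * patch_size, size=(n_ferns, tests_per_fern, 2))

def fern_codes(patch):
    flat = patch.ravel()
    bits = flat[pairs[..., 0]] > flat[pairs[..., 1]]          # (n_ferns, tests_per_fern)
    return bits.dot(1 << np.arange(tests_per_fern))           # one bin index per fern

# Toy training patches: each class is a noisy version of a class-specific template.
templates = rng.random((n_classes, patch_size, patch_size))
counts = np.ones((n_classes, n_ferns, n_bins))                # Laplace-smoothed histograms
for c in range(n_classes):
    for _ in range(200):
        patch = templates[c] + 0.1 * rng.normal(size=templates[c].shape)
        counts[c, np.arange(n_ferns), fern_codes(patch)] += 1
log_prob = np.log(counts / counts.sum(axis=2, keepdims=True))

# Classify a new patch by summing per-fern log-likelihoods over classes.
test = templates[1] + 0.1 * rng.normal(size=(patch_size, patch_size))
codes = fern_codes(test)
scores = log_prob[:, np.arange(n_ferns), codes].sum(axis=1)
print("predicted class:", int(scores.argmax()), "scores:", np.round(scores, 2))
```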
{"title":"Multi class boosted random ferns for adapting a generic object detector to a specific video","authors":"Pramod Sharma, R. Nevatia","doi":"10.1109/WACV.2014.6836028","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836028","url":null,"abstract":"Detector adaptation is a challenging problem and several methods have been proposed in recent years. We propose multi class boosted random ferns for detector adaptation. First we collect online samples in an unsupervised manner and collected positive online samples are divided into different categories for different poses of the object. Then we train a multi-class boosted random fern adaptive classifier. Our adaptive classifier training focuses on two aspects: discriminability and efficiency. Boosting provides discriminative random ferns. For efficiency, our boosting procedure focuses on sharing the same feature among different classes and multiple strong classifiers are trained in a single boosting framework. Experiments on challenging public datasets demonstrate effectiveness of our approach.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84756556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6