
Latest Publications: 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance

Object tracking with dynamic feature graph
Feng Tang, Hai Tao
Two major problems for model-based object tracking are: 1) how to represent an object so that it can effectively be discriminated from the background and other objects; and 2) how to dynamically update the model to accommodate changes in object appearance and structure. Traditional appearance-based representations (such as color histograms) fail when the object has rich texture. In this paper, we present a novel feature-based object representation, the attributed relational graph (ARG), for reliable object tracking. The object is modeled with invariant features (SIFT), and their relationship is encoded in the form of an ARG that can effectively distinguish the object from the background and other objects. We adopt a competitive and efficient dynamic model to adaptively update the object model by adding new stable features and deleting inactive ones. A relaxation labeling method is used to match the model graph with the observation to obtain the best object position. Experiments show that our method can track reliably even under dramatic appearance changes, occlusions, etc.
DOI: https://doi.org/10.1109/VSPETS.2005.1570894 · Published: 2005-10-15
Citations: 65
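The dynamic model update described above (adding newly stable features and pruning inactive ones) can be sketched as follows. This is a minimal illustration of the idea rather than the authors' implementation; the class names and the hit/miss thresholds are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureNode:
    descriptor: tuple        # e.g. a SIFT descriptor, stored as a tuple of floats
    position: tuple          # (x, y) location inside the object region
    hits: int = 1            # frames in which this feature was matched
    misses: int = 0          # consecutive frames without a match

@dataclass
class ObjectModel:
    nodes: list = field(default_factory=list)
    max_misses: int = 5       # hypothetical threshold for declaring a feature inactive
    min_hits_to_add: int = 3  # hypothetical threshold for declaring a candidate stable

    def update(self, matched_ids, candidates):
        """matched_ids: indices of model nodes matched in the current frame.
        candidates: observed FeatureNode objects being considered for addition."""
        for i, node in enumerate(self.nodes):
            if i in matched_ids:
                node.hits += 1
                node.misses = 0
            else:
                node.misses += 1
        # delete inactive features
        self.nodes = [n for n in self.nodes if n.misses < self.max_misses]
        # add candidates that have proven stable over several frames
        self.nodes.extend(c for c in candidates if c.hits >= self.min_hits_to_add)
```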
Application and Evaluation of Colour Constancy in Visual Surveillance
John-Paul Renno, Dimitrios Makris, T. Ellis, Graeme A. Jones
The problem of colour constancy in the context of visual surveillance applications is addressed in this paper. We seek to reduce the variability of the surface colours inherent in the video of most indoor and outdoor surveillance scenarios, to improve the robustness and reliability of applications that depend on reliable colour descriptions, e.g., content retrieval. Two well-known colour constancy algorithms - the Grey-world and Gamut-mapping - are applied to frame sequences containing significant variations in the colour temperature of the illuminant. We also consider the problem of automatically selecting a reference image, representative of the scene under the canonical illuminant. A quantitative evaluation of the performance of the colour constancy algorithms is undertaken.
DOI: https://doi.org/10.1109/VSPETS.2005.1570929 · Published: 2005-10-15
Citations: 33
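Of the two algorithms evaluated above, the Grey-world method is the simpler: it assumes the average surface reflectance of a scene is achromatic and rescales each channel so that its mean matches the global mean intensity. The sketch below is a generic NumPy implementation under that assumption, not the authors' evaluation code.

```python
import numpy as np

def grey_world(frame: np.ndarray) -> np.ndarray:
    """Apply the Grey-world correction to an 8-bit RGB frame of shape (H, W, 3)."""
    img = frame.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel averages
    grey = channel_means.mean()                      # the assumed achromatic level
    gains = grey / np.maximum(channel_means, 1e-6)   # avoid division by zero
    corrected = img * gains                          # per-channel gain correction
    return np.clip(corrected, 0, 255).astype(np.uint8)
```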
Appearance-based 3D face tracker: an evaluation study
F. Dornaika, A. Sappa
The ability to detect and track human heads and faces in video sequences is useful in a great number of applications. In this paper, we present our recent 3D face tracker, which combines online appearance models with an image registration technique. This monocular tracker runs in real time and is insensitive to drift. We introduce a scheme that incorporates the orientation of local facial regions into the registration technique. Moreover, we introduce a general framework for evaluating the developed appearance-based tracker. Precision and usability of the tracker are assessed using stereo-based range facial data from which ground-truth 3D motions are inferred. This evaluation quantifies the monocular tracker's accuracy and identifies its working range in 3D space.
DOI: https://doi.org/10.1109/VSPETS.2005.1570906 · Published: 2005-10-15
Citations: 4
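The evaluation protocol above compares the monocular tracker's output against 3D motions inferred from stereo range data. A minimal sketch of such a comparison, assuming estimated and ground-truth head-pose trajectories are available as per-frame 6-DoF vectors (an assumption, since the paper's exact error metrics are not given here), could look like this:

```python
import numpy as np

def pose_errors(estimated: np.ndarray, ground_truth: np.ndarray) -> dict:
    """Both arrays have shape (num_frames, 6): three rotation angles (degrees)
    followed by three translations. Returns the mean absolute error per parameter."""
    abs_err = np.abs(estimated - ground_truth)
    mae = abs_err.mean(axis=0)
    names = ["pitch", "yaw", "roll", "tx", "ty", "tz"]
    return dict(zip(names, mae))
```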
Deleted Interpolation Using a Hierarchical Bayesian Grammar Network for Recognizing Human Activity
Kris Kitani, Y. Sato, A. Sugimoto
From the viewpoint of an intelligent video surveillance system, the high-level recognition of human activity requires a priori hierarchical domain knowledge as well as a means of reasoning based on that knowledge. We approach the problem of human activity recognition based on the understanding that activities are hierarchical, temporally constrained, and temporally overlapped. While stochastic grammars and graphical models have been widely used for the recognition of human activity, methods combining hierarchy with complex queries have been limited. We propose a new method that merges the advantages of both approaches to recognize activities in real time. To address the hierarchical nature of human activity recognition, we implement a hierarchical Bayesian network (HBN) based on a stochastic context-free grammar (SCFG). The HBN is applied to digressive substrings of the current string of evidence via deleted interpolation (DI) to calculate the probability distribution of overlapped activities in the current string. Preliminary results from the analysis of activity sequences from a video surveillance camera show the validity of our approach.
DOI: https://doi.org/10.1109/VSPETS.2005.1570921 · Published: 2005-10-15
Citations: 29
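Deleted interpolation, as used above, combines probability estimates obtained from progressively shorter substrings of the evidence into a single weighted score. The sketch below illustrates only that mixing step; the substring scheme (dropping the oldest symbols), the scoring callback, and the weights are placeholder assumptions, not the paper's HBN.

```python
from typing import Callable, Sequence

def deleted_interpolation(
    evidence: Sequence[str],
    score: Callable[[Sequence[str]], float],
    weights: Sequence[float],
) -> float:
    """score(substring) stands in for the hierarchical Bayesian network's estimate
    for one substring; weights must sum to 1 and are normally tuned on held-out data."""
    assert abs(sum(weights) - 1.0) < 1e-9
    total = 0.0
    for k, w in enumerate(weights):
        suffix = evidence[k:]  # drop the k oldest symbols (assumed substring scheme)
        if suffix:
            total += w * score(suffix)
    return total

# Example usage with a dummy scorer that favours longer explanations:
if __name__ == "__main__":
    dummy = lambda s: len(s) / 10.0
    print(deleted_interpolation(["enter", "browse", "pickup"], dummy, [0.6, 0.3, 0.1]))
```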
Behavior recognition via sparse spatio-temporal features
Piotr Dollár, V. Rabaud, G. Cottrell, Serge J. Belongie
A common trend in object recognition is to detect and leverage sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.
DOI: https://doi.org/10.1109/VSPETS.2005.1570899 · Published: 2005-10-15
Citations: 2794
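The alternative detector this paper proposes is commonly described as a separable linear filter response: 2D Gaussian smoothing in space combined with a quadrature pair of 1D Gabor filters in time, with interest points taken at local maxima of the response volume. The sketch below follows that description; the parameter values and the filter envelope are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d

def response(video: np.ndarray, sigma: float = 2.0, tau: float = 2.5) -> np.ndarray:
    """video: grayscale array of shape (T, H, W). Returns a response volume of the
    same shape; local maxima of this volume would be kept as interest points."""
    # spatial Gaussian smoothing, applied frame by frame (no smoothing along time)
    smoothed = gaussian_filter(video.astype(np.float64), sigma=(0, sigma, sigma))
    # quadrature pair of 1D temporal Gabor filters
    t = np.arange(-int(4 * tau), int(4 * tau) + 1, dtype=np.float64)
    omega = 4.0 / tau                                  # assumed frequency setting
    envelope = np.exp(-t**2 / (2 * tau**2))
    h_even = np.cos(2 * np.pi * omega * t) * envelope
    h_odd = np.sin(2 * np.pi * omega * t) * envelope
    even = convolve1d(smoothed, h_even, axis=0, mode="nearest")
    odd = convolve1d(smoothed, h_odd, axis=0, mode="nearest")
    # energy of the quadrature pair: strong for periodic or transient local motion
    return even**2 + odd**2
```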