
2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance: Latest Publications

A hybrid blob- and appearance-based framework for multi-object tracking through complex occlusions
Li-Qun Xu, P. Puig
Static and dynamic occlusions due to stationary scene structures and/or interactions between moving objects are a major concern in tracking multiple objects in dynamic and cluttered visual scenes. We propose a hybrid blob- and appearance-based analysis framework as a solution to the problem, exploiting the strengths of both. The core of this framework is an effective probabilistic appearance-based technique for handling complex occlusions. We introduce into the conventional likelihood function a novel 'spatial-depth affinity metric' (SDAM), which utilises information on both the spatial locations of pixels and the dynamic depth ordering of the component objects forming a group, to improve object segmentation during occlusions. Depth ordering estimation is achieved through a combination of top-down and bottom-up approaches. Experiments on some difficult real-world scenarios of low-resolution and highly compressed videos demonstrate the very promising results achieved.
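The abstract does not give the SDAM formula, so the Python sketch below only illustrates the general idea under assumed forms for each term: a per-pixel appearance likelihood is modulated by a spatial affinity to each object's predicted centre and by a weight derived from the estimated depth ordering, and every pixel of the occluding group is assigned to the object with the highest combined score. All function, field and parameter names (sdam_pixel_assignment, depth_rank, the Gaussian bandwidths) are hypothetical.

```python
import numpy as np

def sdam_pixel_assignment(pixels_rgb, pixel_xy, objects):
    """Assign each pixel of an occluding group to one of its member objects.

    Illustrative sketch only; the weighting scheme is an assumption, not the
    paper's exact spatial-depth affinity metric.

    pixels_rgb : (N, 3) pixel colours in [0, 1]
    pixel_xy   : (N, 2) pixel image coordinates
    objects    : list of dicts with keys 'mean_rgb' (3,), 'centre' (2,),
                 'sigma' (float spatial spread) and 'depth_rank' (0 = nearest)
    Returns an (N,) array of object indices.
    """
    scores = []
    for obj in objects:
        # Appearance term: Gaussian likelihood around the object's mean colour.
        app = np.exp(-np.sum((pixels_rgb - obj['mean_rgb']) ** 2, axis=1) / 0.05)
        # Spatial term: pixels close to the predicted object centre score higher.
        d2 = np.sum((pixel_xy - obj['centre']) ** 2, axis=1)
        spatial = np.exp(-d2 / (2.0 * obj['sigma'] ** 2))
        # Depth term: objects nearer the camera are favoured when evidence ties.
        depth = 1.0 / (1.0 + obj['depth_rank'])
        scores.append(app * spatial * depth)
    return np.argmax(np.stack(scores, axis=0), axis=0)

# Example: two overlapping objects, red in front of blue.
rng = np.random.default_rng(0)
objs = [
    {'mean_rgb': np.array([0.9, 0.1, 0.1]), 'centre': np.array([30.0, 50.0]),
     'sigma': 15.0, 'depth_rank': 0},
    {'mean_rgb': np.array([0.1, 0.1, 0.9]), 'centre': np.array([70.0, 50.0]),
     'sigma': 15.0, 'depth_rank': 1},
]
labels = sdam_pixel_assignment(rng.random((200, 3)), rng.random((200, 2)) * 100, objs)
print(np.bincount(labels))
```

In a full tracker the appearance term would come from each object's maintained colour model and the depth ranks from the combined top-down/bottom-up ordering estimate described in the abstract.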
{"title":"A hybrid blob- and appearance-based framework for multi-object tracking through complex occlusions","authors":"Li-Qun Xu, P. Puig","doi":"10.1109/VSPETS.2005.1570900","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570900","url":null,"abstract":"Static and dynamic occlusions due to stationary scene structures and/or interactions between moving objects are a major concern in tracking multiple objects in dynamic and cluttered visual scenes. We propose a hybrid blob- and appearance-based analysis framework as a solution to the problem, exploiting the strength of both. The core of this framework is an effective probabilistic appearance based technique for complex occlusions handling. We introduce in the conventional likelihood function a novel 'spatial-depth affinity metric' (SDAM), which utilises information of both spatial locations of pixels and dynamic depth ordering of the component objects forming a group, to improve object segmentation during occlusions. Depth ordering estimation is achieved through a combination of top-down and bottom-up approach. Experiments on some real-world difficult scenarios of low resolution and highly compressed videos demonstrate the very promising results achieved.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"542 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129997371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Application and Evaluation of Colour Constancy in Visual Surveillance
John-Paul Renno, Dimitrios Makris, T. Ellis, Graeme A. Jones
The problem of colour constancy in the context of visual surveillance applications is addressed in this paper. We seek to reduce the variability of the surface colours inherent in the video of most indoor and outdoor surveillance scenarios, to improve the robustness and reliability of applications which depend on reliable colour descriptions, e.g. content retrieval. Two well-known colour constancy algorithms - the Grey-world and Gamut-mapping - are applied to frame sequences containing significant variations in the colour temperature of the illuminant. We also consider the problem of automatically selecting a reference image, representative of the scene under the canonical illuminant. A quantitative evaluation of the performance of the colour constancy algorithms is undertaken.
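Of the two algorithms evaluated, Grey-world is compact enough to sketch. The snippet below is a minimal per-frame version that assumes floating-point RGB input; any pre-processing the paper applies (foreground masking, reference-image selection) is not specified in the abstract and is omitted here.

```python
import numpy as np

def grey_world(image):
    """Grey-world colour constancy: scale each channel so that its mean
    matches the overall grey level, discounting a global illuminant cast.

    image : (H, W, 3) float RGB array in [0, 1]
    Returns the corrected image, clipped back to [0, 1].
    """
    channel_means = image.reshape(-1, 3).mean(axis=0)   # crude illuminant estimate
    grey = channel_means.mean()                         # target grey level
    gains = grey / np.maximum(channel_means, 1e-6)      # per-channel correction
    return np.clip(image * gains, 0.0, 1.0)

# Example: remove a synthetic warm (reddish) colour cast from a random frame.
frame = np.random.default_rng(1).random((480, 640, 3)) * np.array([1.0, 0.8, 0.6])
print(frame.reshape(-1, 3).mean(axis=0))               # unbalanced channel means
print(grey_world(frame).reshape(-1, 3).mean(axis=0))   # roughly equal after correction
```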
{"title":"Application and Evaluation of Colour Constancy in Visual Surveillance","authors":"John-Paul Renno, Dimitrios Makris, T. Ellis, Graeme A. Jones","doi":"10.1109/VSPETS.2005.1570929","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570929","url":null,"abstract":"The problem of colour constancy in the context of visual surveillance applications is addressed in this paper. We seek to reduce the variability of the surface colours inherent in the video of most indoor and outdoor surveillance scenarios to improve the robustness and reliability of applications which depend on reliable colour descriptions e.g. content retrieval. Two well-known colour constancy algorithms - the Grey-world and Gamut-mapping - are applied to frame sequences containing significant variations in the colour temperature of the illuminant. We also consider the problem of automatically selecting a reference image, representative of the scene under the canonical illuminant. A quantitative evaluation of the performance of the colour constancy algorithms is undertaken","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131847764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Appearance-based 3D face tracker: an evaluation study
F. Dornaika, A. Sappa
The ability to detect and track human heads and faces in video sequences is useful in a great number of applications. In this paper, we present our recent 3D face tracker that combines online appearance models with an image registration technique. This monocular tracker runs in real-time and is drift insensitive. We introduce a scheme that incorporates the orientation of local facial regions into the registration technique. Moreover, we introduce a general framework for evaluating the developed appearance-based tracker. Precision and usability of the tracker are assessed using stereo-based range facial data from which ground truth 3D motions are inferred. This evaluation quantifies the monocular tracker accuracy, and identifies its working range in 3D space.
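The abstract does not describe the online appearance model itself; the sketch below shows only the generic mechanism such trackers typically rely on: a per-pixel Gaussian template whose mean and variance are updated with exponential forgetting, and whose log-likelihood scores the warped facial patches produced by the registration step. The class name and parameter values are hypothetical.

```python
import numpy as np

class OnlineAppearanceModel:
    """Minimal online appearance model: a per-pixel Gaussian template updated
    with exponential forgetting after every tracked frame (sketch only)."""

    def __init__(self, first_patch, forgetting=0.95, init_var=0.01):
        self.mean = first_patch.astype(float)          # template mean image
        self.var = np.full_like(self.mean, init_var)   # per-pixel variance
        self.alpha = forgetting                        # memory of past frames

    def log_likelihood(self, patch):
        # Gaussian observation log-likelihood of a candidate registered patch.
        return -0.5 * np.sum((patch - self.mean) ** 2 / self.var
                             + np.log(2.0 * np.pi * self.var))

    def update(self, patch):
        # Blend the new observation into the template; slow adaptation limits drift.
        diff = patch - self.mean
        self.mean = self.alpha * self.mean + (1.0 - self.alpha) * patch
        self.var = self.alpha * self.var + (1.0 - self.alpha) * diff ** 2

# Example: score and adapt to a sequence of noisy face patches.
rng = np.random.default_rng(2)
template = rng.random((32, 32))
model = OnlineAppearanceModel(template)
for _ in range(5):
    observed = template + 0.05 * rng.standard_normal(template.shape)
    print(round(model.log_likelihood(observed), 1))
    model.update(observed)
```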
{"title":"Appearance-based 3D face tracker: an evaluation study","authors":"F. Dornaika, A. Sappa","doi":"10.1109/VSPETS.2005.1570906","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570906","url":null,"abstract":"The ability to detect and track human heads and faces in video sequences is useful in a great number of applications. In this paper, we present our recent 3D face tracker that combines online appearance models with an image registration technique. This monocular tracker runs in real-time and is drift insensitive. We introduce a scheme that takes into account the orientation of local facial regions into the registration technique. Moreover, we introduce a general framework for evaluating the developed appearance-based tracker. Precision and usability of the tracker are assessed using stereo-based range facial data from which ground truth 3D motions are inferred. This evaluation quantifies the monocular tracker accuracy, and identifies its working range in 3D space.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131929224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Deleted Interpolation Using a Hierarchical Bayesian Grammar Network for Recognizing Human Activity
Kris Kitani, Y. Sato, A. Sugimoto
From the viewpoint of an intelligent video surveillance system, the high-level recognition of human activity requires a priori hierarchical domain knowledge as well as a means of reasoning based on that knowledge. We approach the problem of human activity recognition based on the understanding that activities are hierarchical, temporally constrained and temporally overlapped. While stochastic grammars and graphical models have been widely used for the recognition of human activity, methods combining hierarchy and complex queries have been limited. We propose a new method of merging and implementing the advantages of both approaches to recognize activities in real-time. To address the hierarchical nature of human activity recognition, we implement a hierarchical Bayesian network (HBN) based on a stochastic context-free grammar (SCFG). The HBN is applied to digressive substrings of the current string of evidence via deleted interpolation (DI) to calculate the probability distribution of overlapped activities in the current string. Preliminary results from the analysis of activity sequences from a video surveillance camera show the validity of our approach.
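As a rough illustration of the deleted-interpolation step, the sketch below merges the activity distributions obtained from parsing the full evidence string and its shorter ("digressive") substrings into one smoothed distribution. In the paper these distributions come from the SCFG-derived hierarchical Bayesian network; here they are plain dictionaries, and the fixed interpolation weights are assumptions made purely for illustration.

```python
def deleted_interpolation(per_substring_probs, weights):
    """Combine activity distributions from several parses into one estimate.

    per_substring_probs : list of dicts mapping activity label -> probability,
                          one per (sub)string of evidence.
    weights             : one interpolation weight per parse, summing to 1.
    Returns a single normalised distribution over activity labels.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    labels = set().union(*per_substring_probs)
    combined = {lab: sum(w * p.get(lab, 0.0)
                         for w, p in zip(weights, per_substring_probs))
                for lab in labels}
    z = sum(combined.values()) or 1.0          # renormalise for safety
    return {lab: v / z for lab, v in combined.items()}

# Example: the full string favours 'leave_bag', shorter substrings are less sure.
parses = [{'leave_bag': 0.7, 'pass_by': 0.3},   # full evidence string
          {'leave_bag': 0.4, 'pass_by': 0.6},   # drop the oldest symbol
          {'leave_bag': 0.5, 'pass_by': 0.5}]   # shortest substring
print(deleted_interpolation(parses, [0.6, 0.3, 0.1]))
```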
{"title":"Deleted Interpolation Using a Hierarchical Bayesian Grammar Network for Recognizing Human Activity","authors":"Kris Kitani, Y. Sato, A. Sugimoto","doi":"10.1109/VSPETS.2005.1570921","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570921","url":null,"abstract":"From the viewpoint of an intelligent video surveillance system, the high-level recognition of human activity requires a priori hierarchical domain knowledge as well as a means of reasoning based on that knowledge. We approach the problem of human activity recognition based on the understanding that activities are hierarchical, temporally constrained and temporally overlapped. While stochastic grammars and graphical models have been widely used for the recognition of human activity, methods combining hierarchy and complex queries have been limited. We propose a new method of merging and implementing the advantages of both approaches to recognize activities in real-time. To address the hierarchical nature of human activity recognition, we implement a hierarchical Bayesian network (HBN) based on a stochastic context-free grammar (SCFG). The HBN is applied to digressive substrings of the current string of evidence via deleted interpolation (DI) to calculate the probability distribution of overlapped activities in the current string. Preliminary results from the analysis of activity sequences from a video surveillance camera show the validity of our approach.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121645904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Behavior recognition via sparse spatio-temporal features
Piotr Dollár, V. Rabaud, G. Cottrell, Serge J. Belongie
A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.
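The abstract does not state the proposed detector's response function. The sketch below implements one spatio-temporal response of the kind argued for here: spatial Gaussian smoothing combined with a quadrature pair of 1D temporal Gabor filters, R = (I * g * h_ev)^2 + (I * g * h_od)^2, whose local maxima mark candidate interest points. The default parameter values and the omega = 4/tau coupling are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cuboid_response(video, sigma=2.0, tau=1.5, omega=None):
    """Spatio-temporal interest-point response (sketch): spatial Gaussian
    smoothing followed by a quadrature pair of temporal Gabor filters,
    R = (I*g*h_ev)^2 + (I*g*h_od)^2.

    video : (T, H, W) float array of grey-level frames.
    Returns an array of the same shape; local maxima are interest points.
    """
    omega = omega if omega is not None else 4.0 / tau      # assumed coupling
    # Smooth each frame spatially (no smoothing along the time axis).
    smoothed = gaussian_filter(video.astype(float), sigma=(0, sigma, sigma))
    # Quadrature pair of 1D temporal Gabor filters.
    half = 3 * int(np.ceil(tau))
    t = np.arange(-half, half + 1)
    envelope = np.exp(-t ** 2 / tau ** 2)
    h_ev = -np.cos(2 * np.pi * t * omega) * envelope
    h_od = -np.sin(2 * np.pi * t * omega) * envelope
    # Convolve along the temporal axis only.
    even = np.apply_along_axis(lambda s: np.convolve(s, h_ev, mode='same'), 0, smoothed)
    odd = np.apply_along_axis(lambda s: np.convolve(s, h_od, mode='same'), 0, smoothed)
    return even ** 2 + odd ** 2

# Example: a periodic flicker in one corner produces a strong localised response.
clip = np.zeros((40, 64, 64))
clip[:, 20:28, 20:28] = np.sin(np.linspace(0, 8 * np.pi, 40))[:, None, None]
R = cuboid_response(clip)
print(np.unravel_index(np.argmax(R), R.shape))
```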
{"title":"Behavior recognition via sparse spatio-temporal features","authors":"Piotr Dollár, V. Rabaud, G. Cottrell, Serge J. Belongie","doi":"10.1109/VSPETS.2005.1570899","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570899","url":null,"abstract":"A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132981287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2794