
Latest publications: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance

Detecting shopper groups in video sequences
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425347
A. Leykin, M. Tuceryan
We present a generalized extensible framework for automated recognition of swarming activities in video sequences. The trajectory of each individual is produced by the visual tracking sub-system and is further analyzed to detect certain types of high-level grouping behavior. We utilize recent findings in swarming behavior analysis to formulate the problem in terms of a specific distance function that we subsequently apply as part of a two-stage agglomerative clustering method to create a set of swarming events followed by a set of swarming activities. In this paper we present results for one particular type of swarming: shopper grouping. As part of this work the events detected in a relatively short time interval are further integrated into activities, the manifestation of prolonged high-level swarming behavior. The results demonstrate the ability of our method to detect such activities in congested surveillance videos. In particular, in three hours of indoor retail store video, our method correctly identified over 85% of valid "shopper groups" with a very low level of false positives, validated against human-coded ground truth.
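The pipeline above, clustering trajectories under a swarming-specific distance into events, can be sketched with a toy single-linkage agglomerative clusterer. The distance below (mean Euclidean distance between co-temporal track points) is a hypothetical stand-in for the paper's distance function, and `agglomerate` with its threshold are illustrative names and values, not the authors' implementation.

```python
import math

def trajectory_distance(a, b):
    """Mean Euclidean distance between two trajectories over their
    overlapping frames (stand-in for the paper's distance function)."""
    n = min(len(a), len(b))
    return sum(math.dist(a[i], b[i]) for i in range(n)) / n

def agglomerate(tracks, threshold):
    """Single-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until no pair is closer than `threshold`."""
    clusters = [[i] for i in range(len(tracks))]

    def link(c1, c2):
        return min(trajectory_distance(tracks[i], tracks[j])
                   for i in c1 for j in c2)

    while len(clusters) > 1:
        d, i, j = min((link(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        if d > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two shoppers walking side by side, one walking elsewhere.
t1 = [(0, 0), (1, 0), (2, 0)]
t2 = [(0, 1), (1, 1), (2, 1)]
t3 = [(10, 10), (11, 10), (12, 10)]
groups = agglomerate([t1, t2, t3], threshold=2.0)
```

With these tracks, the first two trajectories merge into one group (their mean distance is 1.0) while the third remains a singleton.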
Citations: 16
A framework for track matching across disjoint cameras using robust shape and appearance features
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425308
Christopher S. Madden, M. Piccardi
This paper presents a framework based on robust shape and appearance features for matching the various tracks generated by a single individual moving within a surveillance system. Each track is first automatically analysed in order to detect and remove the frames affected by large segmentation errors and drastic changes in illumination. The object's features computed over the remaining frames prove more robust and capable of supporting correct matching of tracks even in the case of significantly disjointed camera views. The shape and appearance features used include a height estimate as well as illumination-tolerant colour representation of the individual's global colours and the colours of the upper and lower portions of clothing. The results of a test from a real surveillance system show that the combination of these four features can provide a probability of matching as high as 91 percent with 5 percent probability of false alarms under views which have significantly differing illumination levels and suffer from significant segmentation errors in as many as 1 in 4 frames.
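The fusion of a height estimate with global, upper- and lower-body colour features can be illustrated as below. The gating tolerance, the histogram-intersection similarity, and the simple averaging are assumptions for this sketch; the paper's illumination-tolerant colour representation and exact fusion rule are not reproduced here.

```python
def hist_intersection(h1, h2):
    """Histogram intersection: 1.0 for identical normalised histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def match_score(ta, tb, height_tol=0.10):
    """Score two tracks as the same person: heights must agree within
    a relative tolerance, then the intersections of the global,
    upper- and lower-body colour histograms are averaged.
    (Illustrative combination, not the paper's exact method.)"""
    if abs(ta["height"] - tb["height"]) / max(ta["height"], tb["height"]) > height_tol:
        return 0.0
    parts = ("global", "upper", "lower")
    return sum(hist_intersection(ta[k], tb[k]) for k in parts) / len(parts)

person = {"height": 1.78, "global": [0.5, 0.3, 0.2],
          "upper": [0.7, 0.2, 0.1], "lower": [0.1, 0.1, 0.8]}
same   = {"height": 1.80, "global": [0.5, 0.3, 0.2],
          "upper": [0.6, 0.3, 0.1], "lower": [0.1, 0.2, 0.7]}
other  = {"height": 1.55, "global": [0.2, 0.2, 0.6],
          "upper": [0.1, 0.8, 0.1], "lower": [0.3, 0.4, 0.3]}
```

Here `match_score(person, same)` is high (identical height band, similar clothing colours) while `match_score(person, other)` is gated to zero by the height check alone.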
Citations: 22
Automated 3D Face authentication & recognition
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425284
M. Bae, A. Razdan, G. Farin
This paper presents a fully automated 3D face authentication (verification) and recognition (identification) method and recent results from our work in this area. The major contributions of our paper are: (a) the method can handle data with different facial expressions including hair, upper body, clothing, etc., and (b) the development of weighted features for discrimination. The input to our system is a triangular mesh and it outputs a matching percentage against a gallery. Our method includes both surface- and curve-based features that are automatically extracted from given face data. The test set for authentication consisted of 117 different people with 421 scans including different facial expressions. Our study shows an equal error rate (EER) of 0.065% for normal faces and 1.13% for faces with expressions. We report verification rates of 100% on normal faces and 93.12% on faces with expressions at 0.1% FAR. For identification, our experiment shows a 100% rate on normal faces and 95.6% on faces with expressions. From our experiment we conclude that combining feature points, profile curve, and partial face surface matching gives a better authentication and recognition rate than any single matching method.
Citations: 6
Combination of self-organization map and kernel mutual subspace method for video surveillance
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425297
Bailing Zhang, Junbum Park, Hanseok Ko
This paper addresses the video surveillance issue of automatically identifying moving vehicles and people from continuous observation of image sequences. With a single far-field surveillance camera, moving objects are first segmented by simple background subtraction. To reduce redundancy and select representative prototypes from input video streams, the self-organizing feature map (SOM) is applied to both training and testing sequences. The recognition scheme is designed based on the recently proposed kernel mutual subspace (KMS) model. As an alternative to some probability-based models, KMS makes no assumptions about the data sampling process and offers an efficient and robust classifier. Experiments demonstrated a highly accurate recognition result, showing the model's applicability in real-world surveillance systems.
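The prototype-selection step can be illustrated with a tiny 1-D self-organising map: after training, the unit weights act as representative prototypes of the input feature vectors, reducing redundancy before classification. All parameters (map size, learning rate, neighbour influence) are illustrative choices, not values from the paper, and the KMS classifier itself is not sketched here.

```python
import random

def train_som(samples, n_units=3, epochs=50, lr=0.5, seed=0):
    """Train a minimal 1-D self-organising map over feature vectors.
    Returns the unit weight vectors, which serve as prototypes."""
    rng = random.Random(seed)
    # Initialise units on randomly chosen input samples.
    units = [list(rng.choice(samples)) for _ in range(n_units)]
    dim = len(samples[0])
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for x in samples:
            # Best-matching unit: closest in squared Euclidean distance.
            bmu = min(range(n_units),
                      key=lambda u: sum((units[u][d] - x[d]) ** 2
                                        for d in range(dim)))
            # Pull the BMU (and, more weakly, its grid neighbours) toward x.
            for u in range(n_units):
                influence = rate if u == bmu else rate * 0.3 * (abs(u - bmu) == 1)
                for d in range(dim):
                    units[u][d] += influence * (x[d] - units[u][d])
    return units

# Toy feature vectors from two object classes (e.g. people vs. vehicles).
samples = [(0, 0), (1, 1), (0, 1), (10, 10), (11, 10), (10, 11)]
prototypes = train_som(samples)
```

Because every update is a convex step toward a data point, the prototypes stay inside the data's bounding box and settle near the dense regions of the input.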
Citations: 5
Technology, applications and innovations in physical security - A home office perspective
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425275
A. Coleman
Summary form only given. This overview talk will first introduce the Home Office Scientific Development Branch (HOSDB) as an organisation and then will offer a summary of our programmes in the physical security sector. The talk will explain how HOSDB is contributing to protection and law enforcement. I will use a series of examples to cover this area. In the second part, the talk shall focus on vision-based systems and on HOSDB initiatives on this technology. I will provide a strategic view of initiatives aimed at driving innovation in industry and academic research. I will then cover our initiatives in benchmarking and in video evidence analysis. Finally, I will provide an overview of future technology trends from the HOSDB perspective.
Citations: 0
Video verification of point of sale transactions
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425346
P. L. Venetianer, Zhong Zhang, Andrew W. Scanlon, Yongtong Hu, A. Lipton
Loss prevention is a significant challenge in retail enterprises. A significant percentage of this loss occurs at point of sale (POS) terminals. POS data mining tools known collectively as exception based reporting (EBR) are helping retailers, but they have limitations as they can only work statistically on trends and anomalies in digital POS data. By applying video analytics techniques to POS transactions, it is possible to detect fraudulent or anomalous activity at the level of individual transactions. Very specific fraudulent behaviors that cannot be detected via POS data alone become clear when combined with video-derived data. ObjectVideo, a provider of intelligent video software, has produced a system called RetailWatch that combines POS information with video data to create a unique loss prevention tool. This paper describes the system architecture, algorithmic approach, and capabilities of the system, together with a customer case-study illustrating the results and effectiveness of the system.
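One fraud pattern that POS data alone cannot reveal is a transaction processed with no customer present at the lane. The rule below cross-checks POS refund events against customer-presence intervals from a video analytics system. The schema, the rule, and the time window are all hypothetical illustrations; the actual RetailWatch rule set is not public.

```python
def flag_suspicious_refunds(pos_events, customer_present, window=5.0):
    """Return timestamps of refunds issued while video analytics saw
    no customer at the lane (a classic fraud indicator).
    pos_events: list of (timestamp, kind) tuples.
    customer_present: list of (start, end) intervals from video."""
    flagged = []
    for ts, kind in pos_events:
        if kind != "refund":
            continue
        # A refund is cleared if any presence interval (padded by the
        # window) covers its timestamp.
        if not any(s - window <= ts <= e + window for s, e in customer_present):
            flagged.append(ts)
    return flagged

pos = [(100.0, "sale"), (200.0, "refund"), (300.0, "refund")]
presence = [(95.0, 110.0), (195.0, 210.0)]
suspicious = flag_suspicious_refunds(pos, presence)
```

The refund at t=200 is covered by a presence interval; the one at t=300 has no witness in the video stream and is flagged for review.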
Citations: 15
Recognition through constructing the Eigenface classifiers using conjugation indices
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425355
V. Fursov, Nikita Kozin
Principal component analysis (PCA), also called eigenfaces analysis, is one of the most extensively used face image recognition techniques. The idea of the method is the decomposition of image vectors into a system of eigenvectors matched to the maximum eigenvalues. The method of proximity assessment between vectors of principal components essentially influences the recognition quality. In this paper, the use of different indices of conjugation with the subspace spanned by the training vectors is considered as a proximity measure. It is shown that this approach is very effective in the case of a small number of training examples. The results of experiments on the standard ORL face database are presented.
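The eigenface baseline itself can be sketched in a few lines: centre the training images, extract the dominant principal direction by power iteration, and classify a query by its nearest neighbour in coefficient space. Note the hedge: the paper replaces the proximity measure with conjugation indices; plain coefficient distance is used here only as the familiar baseline, and the 4-pixel "faces" are toy data.

```python
def top_eigenvector(cov, iters=200):
    """Power iteration: dominant eigenvector of a symmetric matrix."""
    n = len(cov)
    v = [1.0 if i == 0 else 0.0 for i in range(n)]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def build(train):
    """Mean image and dominant eigenface of the training set."""
    m, n = len(train), len(train[0])
    mean = [sum(t[d] for t in train) / m for d in range(n)]
    centred = [[t[d] - mean[d] for d in range(n)] for t in train]
    cov = [[sum(x[i] * x[j] for x in centred) / m for j in range(n)]
           for i in range(n)]
    return mean, top_eigenvector(cov)

def project(mean, pc, img):
    """Coefficient of an image along the dominant eigenface."""
    return sum((img[d] - mean[d]) * pc[d] for d in range(len(pc)))

def classify(mean, pc, train, labels, img):
    """Nearest neighbour in eigenface coefficient space (Euclidean
    proximity here, not the paper's conjugation indices)."""
    q = project(mean, pc, img)
    coeffs = [project(mean, pc, t) for t in train]
    return labels[min(range(len(train)), key=lambda i: abs(coeffs[i] - q))]

# Toy 4-pixel "faces": two subjects, two samples each.
train = [[1, 0, 1, 0], [0.9, 0.1, 1, 0], [0, 1, 0, 1], [0.1, 0.9, 0, 1]]
labels = ["A", "A", "B", "B"]
mean, pc = build(train)
```

A noisy variant of subject A's face projects next to A's training coefficients and is labelled accordingly.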
Citations: 3
Recovering the linguistic components of the manual signs in American Sign Language
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425352
Liya Ding, Aleix M. Martinez
Manual signs in American Sign Language (ASL) are constructed using three building blocks: handshape, motion, and place of articulation. Only when these three are successfully estimated can a sign be uniquely identified. Hence, the use of pattern recognition techniques that use only a subset of these is inappropriate. To achieve accurate classification, the motion, the handshape and their three-dimensional position need to be recovered. In this paper, we define an algorithm to determine these three components from a single video sequence of two-dimensional pictures of a sign. We demonstrate the use of our algorithm in describing and recognizing a set of manual signs in ASL.
Citations: 14
Searching surveillance video
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425289
A. Hampapur, L. Brown, R. Feris, A. Senior, Chiao-Fe Shu, Ying-li Tian, Y. Zhai, M. Lu
Surveillance video is used in two key modes: watching for known threats in real time and searching for events of interest after the fact. Typically, real-time alerting is a localized function, e.g. an airport security center receives and reacts to a "perimeter breach alert", while investigations often tend to encompass a large number of geographically distributed cameras, as in the London bombing or Washington sniper incidents. Enabling effective search of surveillance video for investigation and preemption involves indexing the video along multiple dimensions. This paper presents a framework for surveillance search which includes video parsing, indexing and query mechanisms. It explores video parsing techniques which automatically extract index data from video, indexing which stores data in relational tables, retrieval which uses SQL queries to retrieve events of interest, and the software architecture that integrates these technologies.
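The parse-index-query loop described here maps directly onto a relational store: parsed video events land in tables, and investigations become SQL queries across cameras. A minimal sketch with Python's built-in `sqlite3` follows; the schema and event names are illustrative, not the paper's actual table layout.

```python
import sqlite3

# In-memory index of parsed video events (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE events (
    camera_id INTEGER, event_type TEXT,
    object_class TEXT, ts REAL)""")
db.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", [
    (1, "enter_region", "person",  10.0),
    (1, "abandon",      "baggage", 42.5),
    (2, "enter_region", "vehicle", 43.1),
    (2, "abandon",      "baggage", 90.0),
])

# Investigative query: all abandoned-object events in a time window,
# across every camera.
rows = db.execute("""SELECT camera_id, ts FROM events
                     WHERE event_type = 'abandon' AND ts BETWEEN 0 AND 60
                     ORDER BY ts""").fetchall()
```

The same table supports very different investigations (per-camera timelines, co-occurring events, object-class filters) purely by varying the query, which is the point of indexing video into relational form.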
Citations: 35
Stationary target detection using the objectvideo surveillance system
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425317
P. L. Venetianer, Zhong Zhang, Weihong Yin, A. Lipton
Detecting stationary objects, such as an abandoned bag or a parked vehicle, is crucial in a wide range of video surveillance and monitoring applications. ObjectVideo, the leader in intelligent video software, has been deploying commercial products to address these problems for the last 5 years. The ObjectVideo VEW and OnBoard systems address these problems using an array of algorithms optimized for various scenario types that can be selected dynamically. This paper describes the key challenges and algorithms, and presents results on the standard i-LIDS dataset.
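The core idea of stationary-object detection, an object whose position stops changing for long enough, can be illustrated with a simple persistence heuristic over per-frame detections. This is a generic sketch, not ObjectVideo's algorithm; the thresholds and the id-to-positions input format are assumptions.

```python
import math

def detect_stationary(detections, min_frames=3, max_move=2.0):
    """Flag object ids whose position stays within `max_move` of an
    anchor point for at least `min_frames` consecutive frames.
    detections: dict mapping object id -> list of (x, y) per frame.
    (Generic persistence heuristic, not ObjectVideo's method.)"""
    stationary = []
    for oid, pts in detections.items():
        run, anchor = 0, None
        for p in pts:
            if anchor is None or math.dist(p, anchor) > max_move:
                anchor, run = p, 1  # object moved: restart the run
            else:
                run += 1
            if run >= min_frames:
                stationary.append(oid)
                break
    return stationary

dets = {
    "bag":    [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)],
    "walker": [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0), (9.0, 0.0)],
}
flagged = detect_stationary(dets)
```

The bag jitters within the tolerance and is flagged after three frames; the walker resets its anchor every frame and never accumulates a run.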
Citations: 56