
Latest publications: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance

Video analytics for retail
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425348
A. Senior, L. Brown, A. Hampapur, Chiao-Fe Shu, Y. Zhai, R. Feris, Ying-li Tian, S. Borger, Christopher R. Carlson
We describe a set of tools for retail analytics based on a combination of video understanding and transaction-log data. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting), and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.
Citations: 50
Automatic people detection and counting for athletic videos classification
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425349
C. Panagiotakis, E. Ramasso, G. Tziritas, M. Rombaut, D. Pellerin
We propose a general framework focused on automatic motion-shape analysis of individuals and multiple people, and on extracting features suitable for action/activity recognition in real, dynamic, and unconstrained environments. We considered various athletic videos from a single uncalibrated, possibly moving camera in order to evaluate the robustness of the proposed method. We used an easily extended hierarchical scheme to classify them into videos of individual and team sports. Robust, adaptive, and independent of camera motion, the proposed features are combined within the Transferable Belief Model (TBM) framework, providing two-level (frame and shot) video categorization. Experimental results of 97% individual/team sport categorization accuracy on a dataset of more than 250 videos of athletic meetings indicate the good performance of the proposed scheme.
Citations: 8
2D and 3D face localization for complex scenes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425339
Ghassan O. Karame, A. Stergiou, N. Katsarakis, Panagiotis Papageorgiou, Aristodemos Pnevmatikakis
In this paper, we address face tracking of multiple people in complex 3D scenes, using multiple calibrated and synchronized far-field recordings. We localize faces in every camera view and associate them across the different views. To cope with the complexity of 2D face localization introduced by the multitude of people and unconstrained face poses, a combination of stochastic and deterministic trackers, detectors and a Gaussian mixture model for face validation are utilized. Then faces of the same person seen from the different cameras are associated by first finding all possible associations and then choosing the best option by means of a 3D stochastic tracker. The performance of the proposed system is evaluated and is found enhanced compared to existing systems.
Citations: 5
Automated 3D Face authentication & recognition
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425284
M. Bae, A. Razdan, G. Farin
This paper presents a fully automated 3D face authentication (verification) and recognition (identification) method and recent results from our work in this area. The major contributions of our paper are: (a) the method can handle data with different facial expressions as well as hair, upper body, clothing, etc., and (b) the development of weighted features for discrimination. The input to our system is a triangular mesh, and it outputs a match percentage against a gallery. Our method includes both surface-based and curve-based features that are automatically extracted from the given face data. The test set for authentication consisted of 117 different people with 421 scans, including different facial expressions. Our study shows an equal error rate (EER) of 0.065% for normal faces and 1.13% for faces with expressions. We report verification rates of 100% for normal faces and 93.12% for faces with expressions at 0.1% FAR. For identification, our experiment shows a 100% rate for normal faces and 95.6% for faces with expressions. From our experiment we conclude that combining feature points, profile curves, and partial face surface matching gives a better authentication and recognition rate than any single matching method.
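The figures above (EER, and verification rate at a fixed FAR) are standard biometric evaluation metrics. As a minimal sketch, not the authors' implementation, they can be computed from lists of genuine (same-person) and impostor match scores:

```python
import numpy as np

def verification_metrics(genuine, impostor, far_target=0.001):
    """Compute EER and the verification rate at a target FAR from
    genuine and impostor score arrays (higher score = better match)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer, ver_at_far = 1.0, 1.0, 0.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuines wrongly rejected
        if abs(far - frr) < best_gap:  # EER: where FAR crosses FRR
            best_gap, eer = abs(far - frr), (far + frr) / 2
        if far <= far_target:          # verification rate at target FAR
            ver_at_far = max(ver_at_far, 1.0 - frr)
    return eer, ver_at_far

# Toy scores: well-separated genuine vs impostor distributions.
rng = np.random.default_rng(0)
genuine = rng.normal(0.9, 0.05, 1000)
impostor = rng.normal(0.3, 0.10, 1000)
eer, ver = verification_metrics(genuine, impostor)
print(f"EER={eer:.4f}, verification rate at 0.1% FAR={ver:.4f}")
```

The exhaustive threshold sweep is O(n²) but fine at this scale; a sorted-scan implementation would be used for large score sets.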
Citations: 6
Technology, applications and innovations in physical security - A home office perspective
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425275
A. Coleman
Summary form only given. This overview talk will first introduce the Home Office Scientific Development Branch (HOSDB) as an organisation and then offer a summary of our programmes in the physical security sector. The talk will explain how HOSDB is contributing to protection and law enforcement, using a series of examples to cover this area. In the second part, the talk focuses on vision-based systems and on HOSDB initiatives around this technology. I will provide a strategic view of initiatives aimed at driving innovation in industry and academic research, and will then cover our initiatives in benchmarking and in video evidence analysis. Finally, I will provide an overview of future technology trends from the HOSDB perspective.
Citations: 0
Video verification of point of sale transactions
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425346
P. L. Venetianer, Zhong Zhang, Andrew W. Scanlon, Yongtong Hu, A. Lipton
Loss prevention is a significant challenge in retail enterprises, and a large percentage of this loss occurs at point of sale (POS) terminals. POS data mining tools, known collectively as exception-based reporting (EBR), are helping retailers, but they have limitations: they can only work statistically on trends and anomalies in digital POS data. By applying video analytics techniques to POS transactions, it is possible to detect fraudulent or anomalous activity at the level of individual transactions. Very specific fraudulent behaviors that cannot be detected via POS data alone become clear when combined with video-derived data. ObjectVideo, a provider of intelligent video software, has produced a system called RetailWatch that combines POS information with video data to create a unique loss prevention tool. This paper describes the system architecture, algorithmic approach, and capabilities of the system, together with a customer case study illustrating the results and effectiveness of the system.
Citations: 15
Recognition through constructing the Eigenface classifiers using conjugation indices
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425355
V. Fursov, Nikita Kozin
Principal component analysis (PCA), also called eigenface analysis, is one of the most extensively used face image recognition techniques. The idea of the method is to decompose image vectors over a system of eigenvectors corresponding to the largest eigenvalues. The proximity measure used to compare vectors of principal components strongly influences recognition quality. In this paper, the use of different indices of conjugation with the subspace spanned by the training vectors is considered as a proximity measure. It is shown that this approach is very effective when the number of training examples is small. Results of experiments on the standard ORL face database are presented.
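A minimal eigenface sketch may help fix the idea of projecting faces onto the leading principal directions; the conjugation-index proximity measure itself is the paper's contribution and is not reproduced here. The flattened-image layout and toy data are illustrative assumptions:

```python
import numpy as np

def eigenfaces(train, k):
    """Compute the top-k eigenfaces from flattened training images.
    train: (n_samples, n_pixels) array. Returns (mean, components)."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Rows of vt are the principal directions (the "eigenfaces"),
    # ordered by decreasing singular value (i.e. largest eigenvalues).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(faces, mean, components):
    """Project faces onto the eigenface subspace -> feature vectors."""
    return (faces - mean) @ components.T

# Toy data: 20 "images" of 64 pixels each.
rng = np.random.default_rng(1)
train = rng.normal(size=(20, 64))
mean, comps = eigenfaces(train, k=5)
feats = project(train, mean, comps)
print(feats.shape)  # (20, 5)
```

Classification then reduces to comparing feature vectors under some proximity measure, which is exactly the design choice the paper investigates.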
Citations: 3
Recovering the linguistic components of the manual signs in American Sign Language
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425352
Liya Ding, Aleix M. Martinez
Manual signs in American Sign Language (ASL) are constructed using three building blocks: handshape, motion, and place of articulation. Only when all three are successfully estimated can a sign be uniquely identified. Hence, pattern recognition techniques that use only a subset of these are inappropriate. To achieve accurate classification, the motion, the handshape, and their three-dimensional position need to be recovered. In this paper, we define an algorithm to determine these three components from a single video sequence of two-dimensional pictures of a sign. We demonstrate the use of our algorithm in describing and recognizing a set of manual signs in ASL.
Citations: 14
Searching surveillance video
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425289
A. Hampapur, L. Brown, R. Feris, A. Senior, Chiao-Fe Shu, Ying-li Tian, Y. Zhai, M. Lu
Surveillance video is used in two key modes: watching for known threats in real time and searching for events of interest after the fact. Typically, real-time alerting is a localized function, e.g. an airport security center receives and reacts to a "perimeter breach alert", while investigations often encompass a large number of geographically distributed cameras, as in the London bombing or Washington sniper incidents. Enabling effective search of surveillance video for investigation and preemption involves indexing the video along multiple dimensions. This paper presents a framework for surveillance search that includes video parsing, indexing, and query mechanisms. It explores video parsing techniques that automatically extract index data from video, indexing that stores the data in relational tables, retrieval that uses SQL queries to find events of interest, and the software architecture that integrates these technologies.
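The parse-index-retrieve pipeline described above can be sketched with an in-memory SQLite event table; the schema and event fields here are illustrative assumptions, not the paper's actual schema:

```python
import sqlite3

# Index: events extracted by video parsing land in a relational table.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE events (
    camera_id INTEGER, ts REAL, obj_class TEXT,
    x REAL, y REAL, event_type TEXT)""")
events = [
    (1, 10.0, "person",  0.2, 0.5, "tripwire"),
    (2, 12.5, "vehicle", 0.7, 0.1, "loiter"),
    (1, 30.2, "person",  0.4, 0.4, "tripwire"),
]
con.executemany("INSERT INTO events VALUES (?,?,?,?,?,?)", events)

# Retrieval: all tripwire crossings by people on camera 1 in a window.
rows = con.execute(
    """SELECT ts, x, y FROM events
       WHERE camera_id = 1 AND obj_class = 'person'
         AND event_type = 'tripwire' AND ts BETWEEN 0 AND 60
       ORDER BY ts""").fetchall()
print(rows)  # [(10.0, 0.2, 0.5), (30.2, 0.4, 0.4)]
```

Once events are in relational tables, multi-dimensional queries (by camera, time, object class, location) come for free from the SQL engine, which is the point of the indexing step.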
Citations: 35
Stationary target detection using the objectvideo surveillance system
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425317
P. L. Venetianer, Zhong Zhang, Weihong Yin, A. Lipton
Detecting stationary objects, such as an abandoned bag or a parked vehicle, is crucial in a wide range of video surveillance and monitoring applications. ObjectVideo, the leader in intelligent video software, has been deploying commercial products to address these problems for the last 5 years. The ObjectVideo VEW and OnBoard systems address these problems using an array of algorithms optimized for various scenario types, which can be selected dynamically. This paper describes the key challenges and algorithms, and presents results on the standard i-LIDS dataset.
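One common approach to stationary-object detection, not necessarily the one used in the ObjectVideo system, is to flag foreground pixels that persist against the background model across many consecutive frames:

```python
import numpy as np

def stationary_mask(frames, bg, diff_thresh=25, persist_frames=5):
    """Flag pixels that differ from the background in at least
    `persist_frames` consecutive frames: stationary-object candidates."""
    persist = np.zeros(bg.shape, dtype=int)
    for f in frames:
        fg = np.abs(f.astype(int) - bg.astype(int)) > diff_thresh
        persist = np.where(fg, persist + 1, 0)  # counter resets when
    return persist >= persist_frames            # pixel matches bg again

# Toy sequence: a 2x2 "bag" present in all 6 frames of a 4x4 scene.
bg = np.zeros((4, 4), dtype=np.uint8)
frames = [bg.copy() for _ in range(6)]
for f in frames:
    f[1:3, 1:3] = 200
mask = stationary_mask(frames, bg)
print(int(mask.sum()))  # 4 pixels flagged as stationary
```

Real systems layer scenario-specific logic on top (object grouping, size filters, owner tracking for abandoned baggage), which is where the per-scenario algorithm selection mentioned above comes in.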
Citations: 56