
Latest publications: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance

Real time face recognition using decision fusion of neural classifiers in the visible and thermal infrared spectrum
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425327
V. Neagoe, A. Ropot, A. Mugioiu
This paper is dedicated to multispectral facial image recognition using decision fusion of neural classifiers. The novelty of this paper is that each classifier is based on the model of Concurrent Self-Organizing Maps (CSOM), previously proposed by the first author. Our main achievement is the implementation of a real-time CSOM face recognition system using decision fusion, which combines the recognition scores generated from the visual channels {(R, G, and B) or Y} with those of a thermal infrared classifier. As a source of color and infrared images, we used our VICFACE database of 38 subjects. Each picture is 160 × 120 pixels; for each subject there are pictures corresponding to various facial expressions and illuminations, in the visual and infrared spectrum. The spectral sensitivity of the infrared images corresponds to the long-wave range of 7.5-13 μm. Very good experimental results are reported in terms of recognition score.
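The score-level decision fusion described above can be sketched as a weighted combination of per-channel recognition scores; this is a minimal illustration, not the authors' CSOM system, and the subject names, scores, and channel weights are all invented:

```python
# Score-level decision fusion across spectral channels (minimal sketch).
# All names, scores, and weights below are illustrative, not from the paper.

def fuse_scores(channel_scores, weights):
    """Combine per-channel recognition scores into one fused decision.

    channel_scores: dict channel -> {subject_id: score}
    weights: dict channel -> float (relative importance of each channel)
    Returns the subject with the highest fused score.
    """
    fused = {}
    for channel, scores in channel_scores.items():
        w = weights[channel]
        for subject, s in scores.items():
            fused[subject] = fused.get(subject, 0.0) + w * s
    return max(fused, key=fused.get)

scores = {
    "R":  {"alice": 0.7, "bob": 0.2},
    "G":  {"alice": 0.6, "bob": 0.3},
    "B":  {"alice": 0.5, "bob": 0.4},
    "IR": {"alice": 0.4, "bob": 0.9},  # thermal channel disagrees with RGB
}
weights = {"R": 0.2, "G": 0.2, "B": 0.2, "IR": 0.4}
print(fuse_scores(scores, weights))
```

With the thermal channel weighted most heavily, the fused decision here follows the infrared evidence even though the three visible channels prefer the other subject.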
Citations: 16
Directions in automatic video analysis evaluations at NIST
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425276
J. Garofolo
NIST has been conducting a series of evaluations of the automatic analysis of information in video since 2001. These began within the NIST text retrieval evaluation (TREC) as a pilot track on searching for information in large collections of video. The evaluation series was spun off into its own evaluation/workshop series called TRECVID. TRECVID continues to examine the challenge of extracting features for search technologies. In 2004, NIST also began an evaluation series dedicated to assessing video object detection and tracking technologies using training and test sets that were significantly larger than those used in the past, facilitating novel machine learning approaches and supporting statistically informative evaluation results. Eventually this effort was merged with other video processing evaluations being implemented in Europe under the classification of events, activities, and relationships (CLEAR) consortium. NIST's goal is to evolve these evaluations of video processing technologies towards a focus on the detection of visually observable events and 3D modeling, and to help the computer vision community make strides in the areas of accuracy, robustness, and efficiency.
Citations: 0
High performance 3D sound localization for surveillance applications
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425372
F. Keyrouz, K. Diepold, S. Keyrouz
One of the key features of the human auditory system is its nearly constant omni-directional sensitivity; e.g., the system reacts to alerting signals coming from a direction away from the focus of visual attention. In many surveillance situations where visual attention fails completely because the robot's cameras have no direct line of sight to the sound sources, the ability to estimate the direction of a source of danger from sound becomes extremely important. We present in this paper a novel method for sound localization in azimuth and elevation based on a humanoid head. The method was tested both in simulations and in a real reverberant environment. Compared to state-of-the-art localization techniques, the method is able to localize 3D sound sources with high accuracy even in the presence of reflections and strong distortion.
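For background, the simplest form of direction-of-arrival estimation from two microphones uses the time difference of arrival (TDOA) under a far-field assumption; this is a textbook formula, not the paper's HRTF-based humanoid-head method, which additionally recovers elevation:

```python
import math

def azimuth_from_tdoa(tdoa_s, mic_distance_m, speed_of_sound=343.0):
    """Estimate source azimuth (degrees) from the time difference of
    arrival between two microphones, far-field approximation:
    sin(theta) = c * tdoa / d.  Clamping guards against |ratio| > 1
    caused by measurement noise."""
    ratio = max(-1.0, min(1.0, speed_of_sound * tdoa_s / mic_distance_m))
    return math.degrees(math.asin(ratio))

# A source 30 degrees off-axis, microphones 20 cm apart (synthetic TDOA):
d = 0.2
tdoa = d * math.sin(math.radians(30.0)) / 343.0
print(round(azimuth_from_tdoa(tdoa, d), 1))
```

A single microphone pair leaves a front/back ambiguity; head-related transfer functions, as used in the paper, are one way to resolve it.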
Citations: 13
A particle filter based fusion framework for video-radio tracking in smart spaces
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425293
A. Dore, A. Cattoni, C. Regazzoni
One of the main issues for Ambient Intelligence (AmI) systems is continuously localizing the user and detecting his/her identity in order to provide dedicated services. A video-radio fusion methodology relying on the particle filter algorithm is proposed here to track objects in a complex, extensive environment, exploiting the complementary benefits of both systems. Visual tracking commonly outperforms radio localization in terms of precision, but it suffers from occlusions and illumination changes. Radio measurements, gathered by a user's radio device, are instead unambiguously associated with the respective target through a "virtual" identity (i.e., MAC/IP addresses). The joint use of the two data types allows more robust tracking and greater flexibility in the architectural set-up of the AmI system. The method has been extensively tested in a simulated, off-line framework and on real-world data, proving its effectiveness.
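The core of such a fusion framework is a particle filter whose update step multiplies the likelihoods of the two sensor modalities. A heavily simplified one-dimensional sketch (the paper's tracker works on 2D state with data association; positions, noise levels, and measurement values here are invented):

```python
import random
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian likelihood."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def particle_filter_step(particles, video_meas, radio_meas,
                         sigma_video=0.5, sigma_radio=2.0, motion_noise=0.3):
    """One predict-update-resample cycle fusing two sensor likelihoods.
    Particles are 1-D positions to keep the sketch short; the video
    sensor is modelled as precise, the radio sensor as coarse."""
    # Predict: diffuse particles with motion noise.
    moved = [p + random.gauss(0, motion_noise) for p in particles]
    # Update: weight by the product of video and radio likelihoods.
    weights = [gaussian(p, video_meas, sigma_video) *
               gaussian(p, radio_meas, sigma_radio) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: multinomial resampling proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, video_meas=4.0, radio_meas=4.5)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))
```

Because the video likelihood is the tighter of the two, the particle cloud settles close to the video measurement, with the radio measurement pulling it only slightly.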
Citations: 15
On the development of an autonomous and self-adaptable moving object detector
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425336
H. Celik, A. Hanjalic, E. Hendriks
Object detection is a crucial step in automating monitoring and surveillance. A classical approach to object detection employs supervised learning methods, which are effective in well-defined, narrow application scopes. In this paper we propose a framework for detecting moving objects in video, which first learns, autonomously and on-line, the characteristic features of typical object appearances in various parts of the observed scene. The collected knowledge is then used to calibrate the system for the given scene and to separate isolated appearances of a dominant moving object from other events. Compared to supervised detectors, the proposed framework is self-adaptable and therefore able to handle the large diversity of objects and situations typical of general surveillance and monitoring applications. We demonstrate the effectiveness of our framework by employing it to isolate pedestrians in public places and cars on a highway.
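One way to realize the on-line learning of typical appearances per scene region is a running Gaussian model per region, updated with each observed object and queried to decide whether a new appearance is typical. A sketch under the simplifying assumption that object size is the only feature (the paper's feature set is richer):

```python
class RegionAppearanceModel:
    """Per-scene-region on-line model of typical object size, using
    Welford's running mean/variance.  Using blob size as the single
    appearance feature is an illustrative simplification."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, size):
        # Welford's incremental update of mean and sum of squared deviations.
        self.n += 1
        d = size - self.mean
        self.mean += d / self.n
        self.m2 += d * (size - self.mean)

    def is_typical(self, size, z=3.0):
        if self.n < 2:
            return True  # not enough evidence yet to flag anything
        std = (self.m2 / (self.n - 1)) ** 0.5 or 1e-9
        return abs(size - self.mean) / std <= z

model = RegionAppearanceModel()
for s in [100, 110, 95, 105, 102, 98, 108]:  # pedestrian-sized blobs
    model.update(s)
print(model.is_typical(103))   # another pedestrian-sized blob
print(model.is_typical(900))   # a much larger blob, atypical here
```

In a full system one such model would be maintained per spatial cell of the scene, so that what is "typical" on the sidewalk can differ from what is typical on the road.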
Citations: 3
Using behavior analysis algorithms to anticipate security threats before they impact mission critical operations
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425328
B. Banks, Gary M. Jackson, J. Helly, David N. Chin, T. J. Smith, A. Schmidt, P. Brewer, Roger Medd, D. Masters, Annetta Burger, W. K. Krebs
The objective of this research is to identify, develop, adapt, prototype, integrate, and demonstrate open-access force protection and security technologies and processes. The goal is to provide more open public access to recreational and other non-restricted facilities on military bases and to improve overall base safety and security using advanced video and signal based surveillance. A testbed was created at the Pacific Missile Range Facility (PMRF), Kauai, Hawaii, to demonstrate novel and innovative security solutions that serve these objectives. The testbed consists of (1) novel sensors (video cameras, radio-frequency identification tags, and seismic, lidar, microwave, and infrared sensors), (2) a computer, data storage, and network infrastructure, and (3) behavior analysis software. The behavior analysis software identifies patterns of behavior and discriminates "normal" from "anomalous" behavior in order to anticipate threats so that they can be interdicted before they impact mission-critical operations or cause harm to people and infrastructure.
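One simple pattern-based formulation of "normal" versus "anomalous" behavior is to learn which event-to-event transitions occur in normal activity logs and flag any sequence containing an unseen transition. This is a generic sketch of the idea, not the testbed's actual algorithm, and the event names are invented:

```python
from collections import defaultdict

def learn_transitions(sequences):
    """Record which event-to-event transitions appear in 'normal'
    behaviour logs.  Returns a map event -> set of observed successors."""
    seen = defaultdict(set)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            seen[a].add(b)
    return seen

def is_anomalous(sequence, seen):
    """Flag a sequence if it contains any transition never observed
    during training (seen is a defaultdict, so unknown events yield
    an empty successor set and are flagged)."""
    return any(b not in seen[a] for a, b in zip(sequence, sequence[1:]))

normal_logs = [
    ["enter_gate", "park", "walk_to_office"],
    ["enter_gate", "park", "walk_to_gym"],
    ["enter_gate", "drive_through", "exit_gate"],
]
model = learn_transitions(normal_logs)
print(is_anomalous(["enter_gate", "park", "walk_to_office"], model))
print(is_anomalous(["enter_gate", "park", "approach_fence"], model))
```

A production system would use transition probabilities and a threshold rather than a hard seen/unseen test, so that rare but legitimate behavior is not flagged outright.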
Citations: 6
Foreground object localization using a flooding algorithm based on inter-frame change and colour
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425365
I. Grinias, G. Tziritas
A Bayesian, fully automatic moving object localization method is proposed, using inter-frame differences and background/foreground colour as discrimination cues. Each pixel is classified as "changed" or "unchanged" by mixture analysis, while histograms are used for the statistical description of colours. High-confidence statistical criteria based on change detection are used to compute a map of initially labelled pixels. Finally, a region-growing algorithm, named the priority multi-label flooding algorithm, assigns the remaining pixels to labels using Bayesian dissimilarity criteria. Localization results on well-known benchmark image sequences as well as on webcam and compressed videos are presented.
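The priority flooding idea can be sketched with a plain priority queue: the unlabelled pixel with the smallest dissimilarity to an already-labelled neighbour is always labelled next. The paper's criterion is a Bayesian dissimilarity; absolute intensity difference stands in for it in this sketch, and the image and seeds are invented:

```python
import heapq

def multilabel_flood(image, seeds):
    """Priority multi-label flooding on a 2-D intensity grid.

    image: list of rows of intensities; seeds: dict (y, x) -> label.
    At each step the cheapest frontier pixel (smallest intensity
    difference to a labelled neighbour) is labelled first, so labels
    grow outward from their seeds through homogeneous regions.
    """
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    for (y, x), lab in seeds.items():
        labels[y][x] = lab

    def push_neighbours(heap, y, x, lab):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] is None:
                diff = abs(image[ny][nx] - image[y][x])
                heapq.heappush(heap, (diff, ny, nx, lab))

    heap = []
    for (y, x), lab in seeds.items():
        push_neighbours(heap, y, x, lab)
    while heap:
        _, y, x, lab = heapq.heappop(heap)
        if labels[y][x] is None:  # first (cheapest) claim wins
            labels[y][x] = lab
            push_neighbours(heap, y, x, lab)
    return labels

image = [[10, 12, 200, 205],
         [11, 10, 198, 202]]
seeds = {(0, 0): "bg", (0, 3): "fg"}
result = multilabel_flood(image, seeds)
print(result)
```

On this toy image the dark left half floods with "bg" and the bright right half with "fg", since crossing the intensity edge between them is far more expensive than growing within either region.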
Citations: 2
Camera selection in visual sensor networks
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425290
S. Soro, W. Heinzelman
Wireless networks of visual sensors have recently emerged as a new type of sensor-based intelligent system, with performance and complexity challenges that go beyond those of existing wireless sensor networks. The goal of the visual sensor network we examine is to provide a user with visual information from any arbitrary viewpoint within the monitored field. This can be accomplished by synthesizing image data from a selection of cameras whose fields of view overlap with the desired field of view. In this work, we compare two methods for selecting the camera-nodes. The first method selects cameras that minimize the difference between the images provided by the selected cameras and the image that would be captured by a real camera from the desired viewpoint. The second method also considers the energy limitations of the battery-powered camera-nodes, as well as their importance in the 3D coverage preservation task. Simulations using both metrics for camera-node selection show a clear trade-off between the quality of the reconstructed image and the network's ability to provide full coverage of the monitored 3D space over a longer period of time.
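The quality/lifetime trade-off between the two selection criteria can be captured by a single weighted score per camera-node; this is a toy sketch only, and the field names, the linear trade-off, and all numbers are illustrative assumptions, not the paper's metrics:

```python
def select_cameras(cameras, k, alpha=0.7):
    """Rank camera-nodes by a weighted combination of view similarity
    to the desired viewpoint and remaining battery, then pick the top k.
    alpha = 1.0 reproduces a pure image-quality criterion; lowering
    alpha shifts weight toward preserving network lifetime."""
    def score(cam):
        return alpha * cam["view_similarity"] + (1 - alpha) * cam["battery"]
    return sorted(cameras, key=score, reverse=True)[:k]

cameras = [
    {"id": "c1", "view_similarity": 0.9, "battery": 0.1},  # best view, nearly drained
    {"id": "c2", "view_similarity": 0.7, "battery": 0.9},
    {"id": "c3", "view_similarity": 0.2, "battery": 1.0},  # poor view overlap
]
best = select_cameras(cameras, k=2)
print([c["id"] for c in best])
```

With alpha = 0.7 the nearly drained c1 is still selected but ranked below c2, illustrating how the energy term demotes high-quality but short-lived nodes.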
Citations: 68
Real time detection of stopped vehicles in traffic scenes
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425321
A. Bevilacqua, Stefano Vaccari
Computer vision techniques are widely employed in traffic monitoring systems (TMS) to automatically derive statistical information on traffic flow and to trigger alarms on significant events. Research in this field embraces a wide range of methods developed to recognize moving objects and to infer their behavior. Tracking systems are used to reconstruct the trajectories of moving objects, often detected using background difference approaches. Errors in either motion detection or tracking can perturb the positions of the object centroids used to build the trajectories. To cope with these unavoidable errors, we have conceived a method to detect centers of non-motion by recognizing short stability intervals. These are then connected to build the long stability interval used to measure the overall vehicle stopping time. Extensive experiments on the sequences provided by AVSS 2007, including a comparison with the ground truth, prove the effectiveness of our approach in measuring the maximum stopped delay.
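Detecting stability intervals in a noisy centroid track can be sketched as follows; the distance test, jitter threshold, and minimum duration are illustrative choices, not the paper's exact criteria, and the track coordinates are invented:

```python
def stopped_intervals(centroids, max_jitter=2.0, min_frames=5):
    """Find frame intervals where a tracked centroid stays within
    max_jitter pixels of an anchor position for at least min_frames
    frames: short stability intervals tolerant of small centroid
    perturbations.  Returns a list of (start_frame, end_frame)."""
    intervals = []
    start = 0
    ax, ay = centroids[0]
    for i, (x, y) in enumerate(centroids):
        if ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 > max_jitter:
            if i - start >= min_frames:
                intervals.append((start, i - 1))
            start, (ax, ay) = i, (x, y)  # re-anchor at the moving point
    if len(centroids) - start >= min_frames:
        intervals.append((start, len(centroids) - 1))
    return intervals

# Vehicle jitters around (10, 10) for five frames, then drives away:
track = [(10, 10), (10.5, 10), (10, 10.4), (10.2, 9.9), (10.1, 10),
         (30, 10), (50, 10), (70, 10)]
print(stopped_intervals(track))
```

Adjacent stability intervals separated only by a few noisy frames would then be merged into the long interval from which the stopping time is measured.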
Citations: 38
View-invariant human feature extraction for video-surveillance applications
Pub Date: 2007-09-05 DOI: 10.1109/AVSS.2007.4425331
Grégory Rogez, J. J. Guerrero, C. Orrite-Uruñuela
We present a view-invariant human feature extractor (shape + pose) for pedestrian monitoring in man-made environments. Our approach can be divided into two steps: first, a series of view-based models is built by discretizing the viewpoint with respect to the camera into several training views. During the online stage, the homography that relates the image points to the closest and most adequate training plane is calculated using the dominant 3D directions. The input image is then warped to this training view and processed with the corresponding view-based model. After model fitting, the inverse transformation is applied to the resulting human features, yielding a segmented silhouette and a 2D pose estimate in the original input image. Experimental results demonstrate that our system performs well, independently of the direction of motion, when applied to monocular sequences with a strong perspective effect.
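The warp to a training view and its inverse both apply a 3x3 homography to homogeneous image coordinates. A minimal sketch of the point mapping (the matrix below is a plain translation expressed as a homography, not a calibrated camera-to-training-view mapping):

```python
def apply_homography(H, points):
    """Map 2-D image points through a 3x3 homography H: each point
    (x, y) is lifted to (x, y, 1), multiplied by H, and divided by
    the resulting homogeneous coordinate w."""
    out = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))
    return out

# A pure translation by (5, -2), written as a homography:
H = [[1, 0, 5],
     [0, 1, -2],
     [0, 0, 1]]
print(apply_homography(H, [(0, 0), (10, 10)]))
```

A perspective warp uses the same code with a non-trivial bottom row of H; the inverse transformation mentioned in the abstract is the same operation with the matrix inverse of H.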
Citations: 23