
2007 IEEE Conference on Advanced Video and Signal Based Surveillance: Latest Publications

People tracking across two distant self-calibrated cameras
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425343
R. Pflugfelder, H. Bischof
People tracking is of fundamental importance in multi-camera surveillance systems. In recent years, many approaches to multi-camera tracking have been discussed. Most methods use various image features, the geometric relation between the cameras, or both as cues. Knowing the geometry between distant cameras is desirable, because geometry is unaffected by, for example, drastic changes in object appearance or scene illumination. However, determining the camera geometry is cumbersome. This paper addresses this problem and contributes in two ways. On the one hand, an approach is presented that calibrates two distant cameras automatically. We continue previous work and focus especially on the calibration of the extrinsic parameters. Point correspondences, acquired by detecting points on top of people's heads, are used for this task. On the other hand, qualitative experimental results on the PETS 2006 benchmark data show that the self-calibration is accurate enough for purely geometric tracking of people across distant cameras. Reliable features for matching are hardly available in such cases.
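The two-view geometry recovered from head-top correspondences relates to standard epipolar estimation. As an illustration only (the paper's own calibration algorithm is not reproduced here, and all function names are ours), the sketch below estimates a fundamental matrix from point correspondences with the classical normalized 8-point algorithm:

```python
import numpy as np

def normalize_points(pts):
    """Shift the centroid to the origin and scale the mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def fundamental_from_heads(x1, x2):
    """Normalized 8-point algorithm: returns F with x2_h^T F x1_h ~ 0."""
    p1, T1 = normalize_points(np.asarray(x1, float))
    p2, T2 = normalize_points(np.asarray(x2, float))
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)           # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                     # undo the normalization
    return F / np.linalg.norm(F)
```

With known intrinsics, the essential matrix and hence the extrinsic rotation and translation (up to scale) follow from F; the paper's contribution is obtaining reliable correspondences in the first place.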
Citations: 34
Real-time detection of illegally parked vehicles using 1-D transformation
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425319
J. T. Lee, M. Ryoo, Matthew Riley, J. Aggarwal
With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the image data and thus the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow further processing that may require the two-dimensional data. The proposed algorithm successfully recognizes illegally parked vehicles in real time on the i-LIDS bag and vehicle detection challenge datasets.
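The core idea of collapsing 2-D image data into a 1-D profile before segmentation and tracking can be sketched as follows. This is a simplified illustration, not the authors' transformation: the lane axis, thresholds, and function names are our assumptions.

```python
import numpy as np

def lane_profile(fg_mask):
    """Collapse a binary foreground mask to 1-D by summing across the lane width.

    Tracking then operates on this profile instead of the full image.
    """
    return fg_mask.sum(axis=1)

def stationary_segments(profiles, min_frames=5, min_fill=1):
    """Positions along the lane occupied in every one of the last min_frames
    profiles -- a crude stand-in for 'vehicle stopped in the lane'."""
    recent = np.stack(profiles[-min_frames:])
    return np.flatnonzero((recent >= min_fill).all(axis=0))
```

A persistently occupied run of positions flags a candidate illegally parked vehicle, after which the corresponding 2-D region can be re-examined, mirroring the paper's inversion step.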
Citations: 28
Detecting hidden objects: Security imaging using millimetre-waves and terahertz
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425277
M. Kemp
There has been intense interest in the use of millimetre-wave and terahertz technology for the detection of concealed weapons, explosives and other threats. Radiation at these frequencies is safe, penetrates barriers and has wavelengths short enough to allow discrimination between objects. In addition, many solids, including explosives, have characteristic spectroscopic signatures at terahertz wavelengths which can be used to identify them. This paper reviews the progress made in recent years and identifies the achievements, challenges and prospects for these technologies in checkpoint people screening, stand-off detection of improvised explosive devices (IEDs) and suicide bombers, as well as more specialized screening tasks.
Citations: 16
Learning gender from human gaits and faces
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425362
Caifeng Shan, S. Gong, P. McOwan
Computer-vision-based gender classification is an important component of visual surveillance systems. In this paper, we investigate gender classification from human gaits in image sequences, a relatively understudied problem. Moreover, we propose to fuse gait and face for improved gender discrimination. We exploit Canonical Correlation Analysis (CCA), a powerful tool well suited to relating two sets of measurements, to fuse the two modalities at the feature level. Experiments demonstrate that our multimodal gender recognition system achieves a superior recognition performance of 97.2% on large datasets.
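CCA finds paired projections of two feature sets that are maximally correlated; feature-level fusion of gait and face then amounts to concatenating the leading canonical variates. A minimal numpy sketch of this idea (not the authors' implementation; the regularizer and dimensions are our assumptions):

```python
import numpy as np

def cca(X, Y, reg=1e-8):
    """CCA via whitening + SVD. Returns projections Wx, Wy and the
    canonical correlations (singular values of the whitened cross-covariance)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, S, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U, Ky @ Vt.T, np.clip(S, 0, 1)

def fuse(Xc, Yc, Wx, Wy, d):
    """Feature-level fusion: concatenate the first d canonical variates
    of the (already centered) gait and face features."""
    return np.hstack([Xc @ Wx[:, :d], Yc @ Wy[:, :d]])
```

The fused vectors would then feed an ordinary classifier; the 97.2% figure refers to the authors' full system, not this sketch.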
Citations: 50
Vision based anti-collision system for rail track maintenance vehicles
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425305
F. Maire
Maintenance trains travel in convoy. In Australia, only the first train of the convoy pays attention to the track signalization (the other convoy vehicles simply follow the preceding vehicle). Because of human error, collisions can happen between the maintenance vehicles. Although an anti-collision system based on a laser distance meter is already in operation, the existing system has a limited range due to the curvature of the tracks. In this paper, we introduce a vision-based anti-collision system. The proposed system induces a 3D model of the track as a piecewise quadratic function (with continuity constraints on the function and its derivative). The geometric constraints of the rail tracks allow the creation of a completely self-calibrating system. Although road lane marking detection algorithms perform well most of the time for rail detection, the metallic surface of a rail does not always behave like a road lane marking. We therefore had to develop new techniques to address the specific problems of rail reflectance.
Citations: 28
Anomalous trajectory detection using support vector machines
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425302
C. Piciarelli, G. Foresti
One of the most promising approaches to event analysis in video sequences is based on the automatic modelling of common patterns of activity for later detection of anomalous events. This approach is especially useful in applications that do not necessarily require the exact identification of the events, but only the detection of anomalies that should be reported to a human operator (e.g. video surveillance or traffic monitoring applications). In this paper we propose a trajectory analysis method based on Support Vector Machines; the SVM model is trained on a given set of trajectories and can subsequently detect trajectories substantially differing from the training ones. Particular emphasis is placed on a novel method for estimating the parameter ν, since it heavily influences the performance of the system but cannot easily be estimated a priori. Experimental results are given on both synthetic and real-world data.
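The described detector corresponds closely to a one-class SVM, whose ν parameter upper-bounds the fraction of training trajectories treated as outliers. A hedged sketch using scikit-learn: the arc-length resampling scheme and all parameter values are our assumptions, and the paper's method for estimating ν is not reproduced.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def make_features(trajectories, n_points=16):
    """Resample each (x, y) trajectory to a fixed number of points and
    flatten it into a feature vector, so variable-length tracks compare."""
    feats = []
    for tr in trajectories:
        tr = np.asarray(tr, dtype=float)
        idx = np.linspace(0, len(tr) - 1, n_points)
        resampled = np.column_stack([
            np.interp(idx, np.arange(len(tr)), tr[:, d]) for d in range(2)
        ])
        feats.append(resampled.ravel())
    return np.array(feats)

def train_detector(normal_trajectories, nu=0.05):
    """One-class SVM over normal trajectories; nu bounds the outlier fraction."""
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=nu)
    model.fit(make_features(normal_trajectories))
    return model
```

At test time, `model.predict` returns -1 for trajectories substantially different from the training set and +1 otherwise; how sensitive this boundary is to ν is exactly what motivates the paper's estimation method.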
Citations: 30
Image-based shape model for view-invariant human motion recognition
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425333
Ning Jin, F. Mokhtarian
We propose an image-based shape model for view-invariant human motion recognition. An image-based visual hull explicitly represents the 3D shape of an object and is computed from a set of silhouettes. We then use the set of silhouettes to implicitly represent the visual hull. Because a silhouette is the 2D projection of an object in the 3D world with respect to a certain camera, and is therefore sensitive to the point of view, our multi-silhouette representation of the visual hull entails a correspondence between views. To guarantee this correspondence, we define a canonical multi-camera system and a canonical human body orientation in motion. We then "normalize" all the constructed visual hulls into the canonical multi-camera system, align them to follow the canonical orientation, and finally render them. The rendered views thereby satisfy the correspondence requirement. In our visual hull representation, each silhouette is represented as a fixed number of sampled points on its closed contour; the 3D shape information is thus implicitly encoded in the concatenation of multiple 2D contours. Each motion class is then learned by a Hidden Markov Model (HMM) with mixture-of-Gaussians outputs. Experiments using our algorithm on several datasets give encouraging results.
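Representing each silhouette as a fixed number of sampled points on its closed contour can be done by uniform arc-length resampling. A sketch of that step alone (our own code, not the authors'):

```python
import numpy as np

def resample_contour(contour, n_points=64):
    """Resample a closed 2-D contour to n_points equally spaced by arc length,
    so every silhouette yields a feature vector of the same length."""
    contour = np.asarray(contour, dtype=float)
    closed = np.vstack([contour, contour[:1]])            # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0, arc[-1], n_points, endpoint=False)
    x = np.interp(targets, arc, closed[:, 0])
    y = np.interp(targets, arc, closed[:, 1])
    return np.column_stack([x, y])
```

Concatenating the resampled contours from all canonical views gives the fixed-length observation vector that the per-class HMMs with Gaussian-mixture outputs are trained on.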
Citations: 6
On the effect of motion segmentation techniques in description based adaptive video transmission
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425337
Juan Carlos San Miguel, J. Sanchez
This paper presents the results of analysing the effect of different motion segmentation techniques in a system that transmits the information captured by a static surveillance camera in an adaptive way, based on the on-line generation of descriptions at different levels of detail. The video sequences are analyzed to detect the regions of activity (motion analysis) and to differentiate them from the background, and the corresponding descriptions (mainly MPEG-7 moving regions) are generated together with the textures of the moving regions and the associated background image. Depending on the available bandwidth, different levels of transmission are specified, ranging from sending only the generated descriptions to a transmission with all the associated images corresponding to the moving objects and background. We study the effect of three motion segmentation algorithms with respect to segmentation accuracy, the size of the generated descriptions, computational efficiency, and reconstructed data quality.
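Whatever segmentation technique is used, the pipeline reduces to extracting a foreground mask and deriving a moving-region description from it. A deliberately minimal sketch with simple background differencing (a stand-in for, not a reproduction of, the three algorithms the paper compares):

```python
import numpy as np

def segment_motion(background, frame, thresh=25):
    """Pixels differing from the background model by more than thresh
    are marked foreground."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def bounding_box(mask):
    """Tight bounding box (top, left, bottom, right) of a binary mask,
    or None when nothing moves -- the skeleton of a moving-region descriptor."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return None
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1
```

Transmitting only such region descriptors corresponds to the lowest transmission level; higher levels add the region textures and the background image.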
Citations: 1
CASSANDRA: audio-video sensor fusion for aggression detection
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425310
W. Zajdel, J. D. Krijnders, T. Andringa, D. Gavrila
This paper presents a smart surveillance system named CASSANDRA, aimed at detecting instances of aggressive human behavior in public environments. A distinguishing aspect of CASSANDRA is its exploitation of the complementary nature of audio and video sensing to disambiguate scene activity in real-life, noisy and dynamic environments. At the lower level, independent analysis of the audio and video streams yields intermediate descriptors of a scene such as "scream", "passing train" or "articulation energy". At the higher level, a Dynamic Bayesian Network is used as a fusion mechanism that produces an aggregate aggression indication for the current scene. Our prototype system is validated on a set of scenarios performed by professional actors at an actual train station to ensure a realistic audio and video noise setting.
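The higher-level fusion can be illustrated with a two-state forward pass, a drastic simplification of the Dynamic Bayesian Network described: per-frame audio and video scores are combined assuming conditional independence given the state, then smoothed by a transition model. All probabilities below are invented for illustration.

```python
import numpy as np

def forward_fusion(p_audio, p_video, trans, prior):
    """HMM-style forward pass over a binary {calm, aggressive} state.

    p_audio / p_video: per-frame scores in [0, 1] from each modality,
    interpreted as per-modality likelihood of the aggressive state.
    trans: 2x2 state transition matrix, prior: initial state distribution.
    Returns the filtered aggression probability for each frame.
    """
    belief = np.asarray(prior, dtype=float)
    out = []
    for pa, pv in zip(p_audio, p_video):
        # Combine modalities assuming conditional independence given the state.
        like = np.array([(1 - pa) * (1 - pv), pa * pv])  # [calm, aggressive]
        belief = like * (trans.T @ belief)               # predict, then update
        belief /= belief.sum()
        out.append(belief[1])
    return np.array(out)
```

The temporal smoothing is what lets a single noisy "scream" frame be discounted unless the video evidence agrees over several frames.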
Citations: 113
An audio-visual sensor fusion approach for feature based vehicle identification
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425295
A. Klausner, A. Tengg, C. Leistner, Stefan Erb, B. Rinner
In this article we present our software framework for embedded online data fusion, called I-SENSE. We discuss the fusion model and the decision-modeling approach using support vector machines. Due to the system complexity and the generic approach, a data-oriented model is introduced. The main focus of the article is our techniques for extracting features from acoustic and visual data. Experimental results from our "traffic surveillance" case study demonstrate the feasibility of our multi-level data fusion approach.
Citations: 14