
Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223): Latest Publications

Monitoring dynamically changing environments by ubiquitous vision system
Kim C. Ng, Hiroshi Ishiguro, Mohan M. Trivedi, T. Sogo
Accurate and efficient monitoring of dynamically changing environments is one of the most important requirements for visual surveillance systems. This paper describes the development of a ubiquitous vision system for this monitoring purpose. The system, consisting of multiple omni-directional vision sensors, is developed to address two specific surveillance tasks: (1) robust and accurate tracking and profiling of human activities, and (2) dynamic synthesis of virtual views for observing the environment from arbitrary vantage points.
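A rough sense of how such omni-directional sensors are commonly handled can be given with a small sketch: the snippet below unwarps an omni-directional frame into a cylindrical panorama by polar resampling, one common building block when synthesizing views from mirror-based sensors. The image centre, radius range, and output resolution are illustrative assumptions, not values from the paper.

import numpy as np

def unwarp_omni(img, cx, cy, r_min, r_max, out_w=720, out_h=180):
    """Resample an omni-directional image (H x W [x C]) into a panorama.

    (cx, cy) is the assumed mirror centre; r_min/r_max bound the usable
    annulus of the omni image.  Nearest-neighbour lookup keeps the sketch
    short; bilinear interpolation would be smoother in practice.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    xs = cx + np.outer(radii, np.cos(thetas))   # (out_h, out_w) source columns
    ys = cy + np.outer(radii, np.sin(thetas))   # (out_h, out_w) source rows
    xs = np.clip(np.round(xs).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(ys).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]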
{"title":"Monitoring dynamically changing environments by ubiquitous vision system","authors":"Kim C. Ng, Hiroshi Ishiguro, Mohan M. Trivedi, T. Sogo","doi":"10.1109/VS.1999.780270","DOIUrl":"https://doi.org/10.1109/VS.1999.780270","url":null,"abstract":"Accurate and efficient monitoring of dynamically changing environments is one of the most important requirements for visual surveillance systems. This paper describes development of a ubiquitous vision system for this monitoring purpose. The system consisting of multiple omni-directional vision sensors is developed to address two specific surveillance tasks: (1) Robust and accurate tracking and profiling of human activities, (2) Dynamic synthesis of virtual views for observing the environment from arbitrary vantage points.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130630549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
A Bayesian approach to human activity recognition
A. Madabhushi, J. Aggarwal
Presents a methodology for automatically identifying human action. We use a new approach to human activity recognition that incorporates a Bayesian framework. By tracking the movement of the head of the subject over consecutive frames of monocular grayscale image sequences, we recognize actions in the frontal or lateral view. Input sequences captured from a CCD camera are matched against stored models of actions. The action that is found to be closest to the input sequence is identified. In the present implementation, these actions include sitting down, standing up, bending down, getting up, hugging, squatting, rising from a squatting position, bending sideways, falling backward and walking. This methodology finds application in environments where constant monitoring of human activity is required, such as in department stores and airports.
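As one way to picture the matching step, the sketch below scores an observed head-trajectory feature sequence against stored per-action models and returns the action with the highest posterior. The Gaussian per-frame likelihoods, the feature layout, and the prior dictionary are assumptions made for illustration; the paper's stored action models and matching criterion may differ.

import numpy as np

def log_gaussian(x, mean, var):
    # Elementwise log of a univariate normal density.
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def recognise_action(observed, models, priors):
    """observed: (T, D) features per frame; models: {action: (means, vars)},
    each of shape (T, D); priors: {action: prior probability}."""
    best_action, best_score = None, -np.inf
    for action, (means, variances) in models.items():
        log_likelihood = log_gaussian(observed, means, variances).sum()
        score = log_likelihood + np.log(priors[action])  # Bayes' rule, up to a constant
        if score > best_score:
            best_action, best_score = action, score
    return best_action, best_score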
{"title":"A Bayesian approach to human activity recognition","authors":"A. Madabhushi, J. Aggarwal","doi":"10.1109/VS.1999.780265","DOIUrl":"https://doi.org/10.1109/VS.1999.780265","url":null,"abstract":"Presents a methodology for automatically identifying human action. We use a new approach to human activity recognition that incorporates a Bayesian framework. By tracking the movement of the head of the subject over consecutive frames of monocular grayscale image sequences, we recognize actions in the frontal or lateral view. Input sequences captured from a CCD camera are matched against stored models of actions. The action that is found to be closest to the input sequence is identified. In the present implementation, these actions include sitting down, standing up, bending down, getting up, hugging, squatting, rising from a squatting position, bending sideways, falling backward and walking. This methodology finds application in environments where constant monitoring of human activity is required, such as in department stores and airports.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130423902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 106
Using models to recognise man-made objects
A. L. Reno, D. Booth
Objects in aerial images are extracted using a method based on a two-dimensional viewer-centred model. This approach has advantages over existing methods because the models are easy to create and the method is efficient. Experiments illustrate the extraction and tracking of man-made objects in reconnaissance images. In later work we intend to extend the method to allow selection between different object types, or between different views of the same object.
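To make the model-matching idea concrete, the sketch below scores a binary 2D edge template against an image edge map with a chamfer-style distance transform and returns the best placement. This is a generic illustration of viewer-centred template matching, not the paper's actual matcher; the search step size and the use of SciPy's distance transform are assumptions.

import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_match(edge_map, template, step=2):
    """edge_map, template: 2D boolean arrays; returns (score, (row, col)).
    Lower scores mean the template edges sit closer to image edges."""
    dist = distance_transform_edt(~edge_map)        # distance to nearest edge pixel
    th, tw = template.shape
    pts = np.argwhere(template)                     # (row, col) of template edge pixels
    best_score, best_pos = np.inf, (0, 0)
    for r in range(0, edge_map.shape[0] - th + 1, step):
        for c in range(0, edge_map.shape[1] - tw + 1, step):
            score = dist[pts[:, 0] + r, pts[:, 1] + c].mean()
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos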
{"title":"Using models to recognise man-made objects","authors":"A. L. Reno, D. Booth","doi":"10.1109/VS.1999.780266","DOIUrl":"https://doi.org/10.1109/VS.1999.780266","url":null,"abstract":"Objects in aerial images are extracted by using a method based on a two-dimensional viewer-centred model. Using this approach has advantages over existing methods because the models are easily created and it is efficient. Experiments illustrate the extraction and tracking of man-made objects in reconnaissance images. In later work we intend to extend the method to allow selection between different object types, or between different views of the same object.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123860260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Multi-view calibration from planar motion for video surveillance
C. Jaynes
We present a technique for the registration of multiple surveillance cameras through the automatic alignment of image trajectories. The algorithm addresses the problem of recovering the relative pose of several stationary cameras that observe one or more objects in motion. Each camera tracks several objects to produce a set of trajectories in the image. Using a simple calibration procedure, we recover the relative orientation of each camera to the local ground plane in order to projectively unwarp image trajectories onto a nominal plane of correct orientation. Unwarped trajectory curves are then matched by solving for the 3D-to-3D rotation, translation, and scale that bring them into alignment. The relative transform between a pair of cameras is derived from the independent camera-to-ground-plane rotations and the plane-to-plane transform computed from matched trajectories. Registration aligns n cameras with respect to each other in a single camera frame (that of the reference camera). The approach recovers both the epipolar geometry between all cameras and the camera-to-ground rotation for each camera. After calibration, points that are known to lie on a world ground plane can be directly backprojected into each of the camera frames. The algorithm is demonstrated for two-camera and three-camera scenarios by tracking pedestrians as they move through a surveillance area and matching the resulting trajectories.
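The core alignment step, solving for the rotation, translation, and scale that bring two matched ground-plane trajectories into agreement, can be sketched as a 2D least-squares similarity fit. The snippet assumes the trajectories have already been unwarped to the ground plane and put into point-to-point correspondence, which the paper handles separately.

import numpy as np

def align_trajectories(src, dst):
    """src, dst: (N, 2) matched trajectory points on the ground plane.
    Returns (scale, R, t) such that dst ~= scale * (R @ src.T).T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s_c, d_c = src - mu_s, dst - mu_d
    cov = d_c.T @ s_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / s_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t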
{"title":"Multi-view calibration from planar motion for video surveillance","authors":"C. Jaynes","doi":"10.1109/VS.1999.780269","DOIUrl":"https://doi.org/10.1109/VS.1999.780269","url":null,"abstract":"We present a technique for the registration of multiple surveillance cameras through the automatic alignment of image trajectories. The algorithm address the problem of recovering the relative pose of several stationary cameras that observe one or more objects in motion. Each camera tracks several objects to produce a set of trajectories in the image. Using a simple calibration procedure, we recover the relative orientation of each camera to the local ground plane in order to projectively unwarp image trajectories onto a nominal plane of correct orientation. Unwarped trajectory curves are then matched by solving for the 3D to 3D rotation, translation, and scale that bring them into alignment. The relative transform between a pair of cameras is derived from the independent camera-to-ground-plane rotations and the plane-to-plane transform computed from matched trajectories. Registration aligns n-cameras with respect to each other in a single camera frame (that of the reference camera). The approach recovers both the epipolar geometry between all cameras and the camera-to-ground rotation for each camera. After calibration, points that are known to lay on a world ground plane can be directly backprojected into each of the camera frames. The algorithm is demonstrated for two-camera and three-camera scenarios by tracking pedestrians as they move through a surveillance area and matching the resulting trajectories.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124581378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
Frame-rate omnidirectional surveillance and tracking of camouflaged and occluded targets
Terrance E. Boult, R. Micheals, X. Gao, P. Lewis, C. Power, Weihong Yin, A. Erkan
Video surveillance involves watching an area for significant events. Perimeter security generally requires watching areas that afford trespassers reasonable cover and concealment. By definition, such "interesting" areas have limited visibility. Furthermore, targets of interest generally attempt to conceal themselves within the cover, sometimes adding camouflage to further reduce their visibility. Such targets are only visible "while in motion". The combined result of limited visibility distance and limited target visibility severely reduces the usefulness of any panning-based approach. As a result, these situations call for a wide field of view and are a natural application for omni-directional VSAM (video surveillance and monitoring). This paper describes an omni-directional tracking system. After motivating its use, we discuss some domain application constraints and background on the paracamera. We then go through the basic components of the frame-rate Lehigh Omni-directional Tracking System (LOTS) and describe some of its unique features. In particular, the system's combined performance depends on novel adaptive multi-background modeling and a novel quasi-connected-components technique. These key components are described in some detail, while other components are summarized. We end with a summary of an external evaluation of the system.
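In the spirit of the multi-background modeling mentioned above, the sketch below keeps a fast-adapting and a slow-adapting running-average background and flags pixels that differ from both. The learning rates, threshold, and grayscale-frame assumption are illustrative; LOTS itself uses more elaborate per-pixel adaptation plus quasi-connected-components grouping, which is not reproduced here.

import numpy as np

class DualBackground:
    """Two running-average backgrounds (fast and slow) for change detection."""

    def __init__(self, first_frame, fast_rate=0.10, slow_rate=0.01, thresh=25.0):
        self.bg_fast = first_frame.astype(np.float32)
        self.bg_slow = first_frame.astype(np.float32)
        self.fast_rate, self.slow_rate, self.thresh = fast_rate, slow_rate, thresh

    def apply(self, frame):
        """frame: 2D grayscale image; returns a boolean foreground mask."""
        f = frame.astype(np.float32)
        mask = (np.abs(f - self.bg_fast) > self.thresh) & \
               (np.abs(f - self.bg_slow) > self.thresh)
        # Adapt both models on background pixels only, so targets are not
        # absorbed into the background while they are being tracked.
        self.bg_fast[~mask] += self.fast_rate * (f[~mask] - self.bg_fast[~mask])
        self.bg_slow[~mask] += self.slow_rate * (f[~mask] - self.bg_slow[~mask])
        return mask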
{"title":"Frame-rate omnidirectional surveillance and tracking of camouflaged and occluded targets","authors":"Terrance E. Boult, R. Micheals, X. Gao, P. Lewis, C. Power, Weihong Yin, A. Erkan","doi":"10.1109/VS.1999.780268","DOIUrl":"https://doi.org/10.1109/VS.1999.780268","url":null,"abstract":"Video surveillance involves watching an area for significant events. Perimeter security generally requires watching areas that afford trespassers reasonable cover and concealment. By definition, such \"interesting\" areas have limited visibility. Furthermore, targets of interest generally attempt to conceal themselves within the cover sometimes adding camouflage to further reduce their visibility. Such targets are only visible \"while in motion\". The combined result of limited visibility distance and target visibility severely reduces the usefulness of any panning-based approach. As a result, these situations call for a wide field of view, and are a natural application for omni-directional VSAM (video surveillance and monitoring). This paper describes an omni-directional tracking system. After motivating its use, we discuss some domain application constraints and background on the paracamera. We then go through the basic components of the frame-rate Lehigh Omni-directional Tracking System (LOTS) and describe some of its unique features. In particular, the system's combined performance depends on novel adaptive multi-background modeling and a novel quasi-connected-components technique. These key components are described in some detail, while other components are summarized. We end with a summary of an external evaluation of the system.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128933843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 134
The analysis of human motion and its application for visual surveillance
D.M. Gavrilla
"Looking at People" is currently one of the most active application area in computer vision. This contribution provides a short overview of existing work on human motion as far as whole-body motion and gestures are concerned. The overview is based on a more extensive survey article (Gavria (1991)); here, the emphasis lies on surveillance scenarios.
{"title":"The analysis of human motion and its application for visual surveillance","authors":"D.M. Gavrilla","doi":"10.1109/VS.1999.780260","DOIUrl":"https://doi.org/10.1109/VS.1999.780260","url":null,"abstract":"\"Looking at People\" is currently one of the most active application area in computer vision. This contribution provides a short overview of existing work on human motion as far as whole-body motion and gestures are concerned. The overview is based on a more extensive survey article (Gavria (1991)); here, the emphasis lies on surveillance scenarios.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126982280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Video surveillance of interactions
Y. Ivanov, C. Stauffer, A. Bobick, W. Grimson
This paper describes an automatic surveillance system that labels events and interactions in an outdoor environment. The system is designed to monitor activities in an open parking lot. It consists of three components: an adaptive tracker; an event generator, which maps object tracks onto a set of pre-determined discrete events; and a stochastic parser. The system performs segmentation and labeling of surveillance video of a parking lot and identifies person-vehicle interactions, such as pick-up and drop-off. The system presented in this paper is developed jointly by the MIT Media Lab and the MIT Artificial Intelligence Lab.
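To illustrate the event-generator stage, the sketch below maps one object track (a sequence of ground-plane positions) onto discrete events by testing containment in a lot region and thresholding speed. The event names, speed threshold, and rectangular lot boundary are assumptions for illustration, not the paper's actual event vocabulary.

import numpy as np

def track_to_events(track, lot_bounds, stop_speed=0.5):
    """track: (T, 2) positions per frame; lot_bounds: (xmin, ymin, xmax, ymax).
    Returns a list of (frame_index, event_name) pairs."""
    xmin, ymin, xmax, ymax = lot_bounds
    inside = lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
    speeds = np.linalg.norm(np.diff(track, axis=0), axis=1)
    events = []
    was_inside, was_stopped = inside(track[0]), False
    for t in range(1, len(track)):
        now_inside = inside(track[t])
        now_stopped = speeds[t - 1] < stop_speed
        if now_inside and not was_inside:
            events.append((t, "enter-lot"))
        if was_inside and not now_inside:
            events.append((t, "leave-lot"))
        if now_stopped and not was_stopped:
            events.append((t, "stop"))
        if was_stopped and not now_stopped:
            events.append((t, "start-moving"))
        was_inside, was_stopped = now_inside, now_stopped
    return events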
{"title":"Video surveillance of interactions","authors":"Y. Ivanov, C. Stauffer, A. Bobick, W. Grimson","doi":"10.1109/VS.1999.780272","DOIUrl":"https://doi.org/10.1109/VS.1999.780272","url":null,"abstract":"This paper describes an automatic surveillance system, which performs labeling of events and interactions in an outdoor environment. The system is designed to monitor activities in an open parking lot. It consists of three components-an adaptive tracker, an event generator, which maps object tracks onto a set of pre-determined discrete events, and a stochastic parser. The system performs segmentation and labeling of surveillance video of a parking lot and identifies person-vehicle interactions, such as pick-up and drop-off. The system presented in this paper is developed jointly by MIT Media Lab and MIT Artificial Intelligence Lab.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129758313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 121
Robust person tracking in real scenarios with non-stationary background using a statistical computer vision approach
G. Rigoll, B. Winterstein, S. Muller
This paper presents a novel approach to robust and flexible person tracking using an algorithm that combines two powerful stochastic modeling techniques. The first is the technique of so-called Pseudo-2D Hidden Markov Models (P2DHMMs), used for capturing the shape of a person within an image frame; the second is the well-known Kalman-filtering algorithm, which uses the output of the P2DHMM to track the person by estimating a bounding-box trajectory indicating the location of the person throughout the entire video sequence. The two algorithms cooperate in an optimal way, and with this cooperative feedback the proposed approach makes the tracking of persons possible even in the presence of background motion, caused for instance by moving objects such as cars, or by camera operations such as panning or zooming. We consider this a major advantage over most other tracking algorithms, which are generally not capable of dealing with background motion. Furthermore, the person to be tracked is not required to wear special equipment (e.g. sensors) or special clothing. We therefore believe that our proposed algorithm is among the first approaches capable of handling such a complex tracking problem. Our results are confirmed by several tracking examples in real scenarios, shown at the end of the paper and provided on the web server of our institute.
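The Kalman-filtering half of the pipeline can be sketched as a constant-velocity filter that smooths the bounding-box centre delivered by the shape model. The state layout and the noise covariances below are illustrative assumptions; the P2DHMM stage that produces the measurements is not reproduced here.

import numpy as np

class BoxCentreKalman:
    """Constant-velocity Kalman filter over a bounding-box centre (x, y)."""

    def __init__(self, x0, y0, process_noise=1e-2, measurement_noise=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])           # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0               # dt = 1 frame
        self.H = np.eye(2, 4)                           # we observe (x, y) only
        self.Q = process_noise * np.eye(4)
        self.R = measurement_noise * np.eye(2)

    def step(self, measured_centre):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured box centre.
        innovation = np.asarray(measured_centre) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                               # smoothed centre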
{"title":"Robust person tracking in real scenarios with non-stationary background using a statistical computer vision approach","authors":"G. Rigoll, B. Winterstein, S. Muller","doi":"10.1109/VS.1999.780267","DOIUrl":"https://doi.org/10.1109/VS.1999.780267","url":null,"abstract":"This paper presents a novel approach to robust and flexible person tracking using an algorithm that combines two powerful stochastic modeling techniques: The first one is the technique of so-called Pseudo-2D Hidden Markov Models (P2DHMMs) used for capturing the shape of a person with an image frame, and the second technique is the well-known Kalman-filtering algorithm, that uses the output of the P2DHMM for tracking the person by estimation of a bounding box trajectory indicating the location of the person within the entire video sequence. Both algorithms are cooperating together in an optimal way, and with this cooperative feedback, the proposed approach even makes the tracking of persons possible in the presence of background motions, for instance caused by moving objects such as cars, or by camera operations as, for example, panning or zooming. We consider this as major advantage compared to most other tracking algorithms that are mostly not capable of dealing with background motion. Furthermore, the person to be tracked is not required to wear special equipment (e.g. sensors) or special clothing. We therefore believe that our proposed algorithm is among the first approaches capable of handling such a complex tracking problem. Our results are confirmed by several tracking examples in real scenarios, shown at the end of the paper and provided on the web server of our institute.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"31 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114018164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
A real-time system for monitoring of cyclists and pedestrians
J. Heikkilä, O. Silvén
Fixed systems are routinely used for monitoring highway traffic; for this purpose, inductive loops and microwave sensors are mainly used. Both techniques achieve very good counting accuracy and are capable of discriminating between trucks and cars. However, pedestrians and cyclists are mostly counted manually. In this paper, we describe a new camera-based automatic system that uses Kalman filtering for tracking and Learning Vector Quantization (LVQ) for classifying the observations as pedestrians or cyclists. Both the requirements for such systems and the algorithms used are described. The tests performed show that the system achieves around 80-90% accuracy in counting and classification.
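The LVQ classification step can be pictured with a minimal LVQ1-style nearest-prototype classifier: each observation is assigned the label of its closest codebook vector, and training nudges that vector toward or away from the sample. The feature vector (for example blob speed and width-to-height ratio) and the learning rate are assumptions for illustration.

import numpy as np

class LVQ1:
    """Nearest-prototype classifier with the basic LVQ1 update rule."""

    def __init__(self, prototypes, labels, learning_rate=0.05):
        self.w = np.asarray(prototypes, dtype=float)     # (K, D) codebook vectors
        self.labels = list(labels)                       # one label per prototype
        self.lr = learning_rate

    def _nearest(self, x):
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def predict(self, x):
        return self.labels[self._nearest(np.asarray(x, dtype=float))]

    def train_step(self, x, label):
        x = np.asarray(x, dtype=float)
        k = self._nearest(x)
        sign = 1.0 if self.labels[k] == label else -1.0  # attract or repel the winner
        self.w[k] += sign * self.lr * (x - self.w[k])

For instance, a classifier initialized with one "pedestrian" and one "cyclist" prototype in (speed, aspect-ratio) space can be trained one labelled observation at a time with train_step and then queried with predict.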
{"title":"A real-time system for monitoring of cyclists and pedestrians","authors":"J. Heikkilä, O. Silvén","doi":"10.1109/VS.1999.780271","DOIUrl":"https://doi.org/10.1109/VS.1999.780271","url":null,"abstract":"Camera based fixed systems are routinely used for monitoring highway traffic. For this purpose inductive loops and microwave sensors are mainly used. Both techniques achieve very good counting accuracy and are capable of discriminating trucks and cars. However pedestrians and cyclists are mostly counted manually. In this paper, we describe a new camera based automatic system that utilizes Kalman filtering in tracking and Learning Vector Quantization (LVQ) for classifying the observations to pedestrians and cyclists. Both the requirements for such systems and the algorithms used are described. The tests performed show that the system achieves around 80%-90% accuracy in counting and classification.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128218787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 329
Multi-camera colour tracking
J. Orwell, Paolo Remagnino, Graeme A. Jones
We propose a colour tracker for use in visual surveillance. The tracker is part of a framework designed to monitor a dynamic scene with more than one camera. Colour tracking complements spatial tracking: it can also be used over large temporal intervals, and between spatially uncalibrated cameras. The colour distributions from objects are modelled, and measures of difference between them are discussed. A context is required for assessing the significance of any difference. It is provided by an analysis of the noise processes: first on the camera capture, then on the underlying variability of the signal. We present results comparing parametric and explicit representations, the inclusion and omission of intensity data, and single and multiple cameras.
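One simple way to realize the colour-distribution modelling and difference measure discussed here is a normalised chromaticity histogram compared with a Bhattacharyya-based distance, as sketched below. The bin count and the choice of (r, g) chromaticity coordinates (which discard overall intensity) are assumptions; the paper also examines parametric representations and the role of intensity data.

import numpy as np

def colour_histogram(pixels, bins=16):
    """pixels: (N, 3) RGB values in [0, 255]; returns a normalised 2D histogram
    over (r, g) chromaticity, which factors out overall intensity."""
    rgb = pixels.astype(np.float64) + 1e-6                 # avoid division by zero
    chroma = rgb[:, :2] / rgb.sum(axis=1, keepdims=True)   # (r, g) in [0, 1]
    hist, _, _ = np.histogram2d(chroma[:, 0], chroma[:, 1],
                                bins=bins, range=[[0.0, 1.0], [0.0, 1.0]])
    return hist / hist.sum()

def bhattacharyya_distance(h1, h2):
    """Zero for identical distributions; grows as the histograms diverge."""
    return -np.log(np.sum(np.sqrt(h1 * h2)) + 1e-12)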
{"title":"Multi-camera colour tracking","authors":"J. Orwell, Paolo Remagnino, Graeme A. Jones","doi":"10.1109/VS.1999.780264","DOIUrl":"https://doi.org/10.1109/VS.1999.780264","url":null,"abstract":"We propose a colour tracker for use in visual surveillance. The tracker is part of a framework designed to monitor a dynamic scene with more than one camera. Colour tracking complements spatial tracking: it can also be used over large temporal intervals, and between spatially uncalibrated cameras. The colour distributions from objects are modelled, and measures of difference between them are discussed. A context is required for assessing the significance of any difference. It is provided by an analysis of the noise processes: first on the camera capture, then on the underlying variability of the signal. We present results comparing parametric and explicit representations, the inclusion and omission of intensity data, and single and multiple cameras.","PeriodicalId":371192,"journal":{"name":"Proceedings Second IEEE Workshop on Visual Surveillance (VS'99) (Cat. No.98-89223)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125251752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53