
Latest publications from the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance

Bottom-up/top-down coordination in a multiagent visual sensor network
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425292
Federico Castanedo, M. A. Patricio, Jesús García, J. M. Molina
This paper presents an approach to multi-sensor coordination in a multiagent visual sensor network, employing a belief-desire-intention model of multiagent systems. Within this system, the interactions between several surveillance-sensor agents and their respective fusion agent are discussed. The surveillance process is improved using a bottom-up/top-down coordination approach in which a fusion agent controls the coordination process. In the bottom-up phase, information is sent to the fusion agent; in the top-down phase, feedback messages are sent to those surveillance-sensor agents whose tracking is inconsistent with the global fused tracking process. This feedback allows each surveillance-sensor agent to correct its tracking process. Finally, preliminary experiments on the PETS 2006 database are presented.
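As a toy illustration of the bottom-up/top-down loop described above (a hedged sketch with invented names, a simple averaging fusion rule, and a distance threshold for inconsistency; not the authors' BDI implementation):

```python
# Sketch: sensor agents report local 2D track estimates upward (bottom-up);
# the fusion agent fuses them and sends corrective feedback (top-down) to any
# agent whose estimate deviates from the fused result by more than `tol`.

def fuse(estimates):
    """Bottom-up phase: average the local track positions."""
    n = len(estimates)
    return tuple(sum(e[i] for e in estimates) / n for i in range(2))

def feedback(estimates, fused, tol=1.0):
    """Top-down phase: flag agents whose estimate is inconsistent."""
    inconsistent = {}
    for agent_id, (x, y) in estimates.items():
        if ((x - fused[0]) ** 2 + (y - fused[1]) ** 2) ** 0.5 > tol:
            inconsistent[agent_id] = fused  # correction message
    return inconsistent

estimates = {"cam1": (10.0, 5.0), "cam2": (10.2, 5.1), "cam3": (14.0, 9.0)}
fused = fuse(list(estimates.values()))
msgs = feedback(estimates, fused, tol=2.0)  # only the outlying agent is corrected
```

Here `cam3` would receive a feedback message, while the two mutually consistent agents continue unchanged.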
Citations: 5
A DSP-based system for the detection of vehicles parked in prohibited areas
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425320
S. Boragno, B. Boghossian, J. Black, D. Makris, S. Velastín
In this paper, a system for automatic robust video surveillance is described, and in particular its application to the problem of locating vehicles that stop in prohibited areas is discussed. The structure of the video-processing software (alarm generation, operator interface, and information storage) is outlined together with the hardware (Trimedia DSP boards and industrial computers) that constitutes an industrial-grade product. The emphasis of this paper is on demonstrating robust detection, and hence we show the results of a performance evaluation carried out with the UK's i-LIDS "Parked Vehicle" reference dataset.
Citations: 34
Human activity recognition with action primitives
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425332
Zsolt L. Husz, A. Wallace, P. Green
This paper considers the link between tracking algorithms and high-level human behavioural analysis, introducing an action primitives model that recovers symbolic labels from tracked limb configurations. The model consists of clusters of similar short-term actions (action primitives), formed automatically and then labelled by supervised learning. The model accommodates both short actions and longer activities, whether periodic or aperiodic, and new labels are added incrementally. We determine the effects of model parameters on the labelling of action primitives using ground truth derived from a motion capture system. We also present a representative example of a labelled video sequence.
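The cluster-then-label idea can be sketched in miniature (an illustrative toy, not the authors' pipeline; the centroids, labels, and 2D "limb configuration" vectors are all assumptions):

```python
import numpy as np

# Toy sketch: short-term limb-configuration vectors are grouped into action
# primitive clusters; each cluster is labelled from supervised examples, and
# a new frame inherits the label of its nearest cluster centroid.

def nearest(centroids, v):
    """Index of the centroid closest to vector v."""
    return int(np.argmin([np.linalg.norm(v - c) for c in centroids]))

centroids = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]  # assumed clusters
cluster_labels = {0: "stand", 1: "walk"}                  # supervised labelling
frame = np.array([4.2, 5.1])                              # new tracked frame
label = cluster_labels[nearest(centroids, frame)]
```

A real system would form the clusters automatically (e.g. by vector quantisation over many frames) rather than hand-picking centroids as done here.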
Citations: 19
Multiple appearance models for face tracking in surveillance videos
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425341
Gurumurthy Swaminathan, V. Venkoparao, S. Bedros
Face tracking is a key component of automated video surveillance systems. It supports and enhances tasks such as face recognition and video indexing. Face tracking in surveillance scenarios is a challenging problem due to ambient illumination variations, face pose changes, occlusions, and background clutter. We present an algorithm for tracking faces in surveillance video based on a particle filter mechanism, using multiple appearance models for robust representation of the face. We propose a color-based appearance model complemented by an edge-based appearance model using Difference-of-Gaussian (DoG) filters. We demonstrate that combined appearance models are more robust to face and scene variations than a single appearance model. For example, a color-template appearance model handles pose variations well but deteriorates under illumination variations; conversely, an edge-based model is robust to illumination variations but fails under substantial pose changes. Hence, the combined model is more robust to pose and illumination changes than either model by itself. We show how the algorithm performs on a real surveillance scenario in which the face undergoes various pose and illumination changes. The algorithm runs in real time at 20 fps on a standard 3.0 GHz desktop PC.
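Combining complementary cues into one particle weight can be sketched as follows (a hedged illustration under assumed similarity measures — Bhattacharyya for color histograms, an exponential of the DoG-response distance for edges — not the authors' exact formulation):

```python
import numpy as np

# Sketch: score a particle by multiplying a color-histogram likelihood with
# an edge (DoG-response) likelihood, so that failure of one cue under pose
# or illumination change is compensated by the other.

def color_likelihood(patch_hist, ref_hist):
    # Bhattacharyya similarity between normalized histograms (1.0 = identical)
    return float(np.sum(np.sqrt(patch_hist * ref_hist)))

def edge_likelihood(patch_dog, ref_dog):
    # similarity of Difference-of-Gaussian response maps
    return float(np.exp(-np.linalg.norm(patch_dog - ref_dog)))

def combined_weight(patch_hist, ref_hist, patch_dog, ref_dog):
    return color_likelihood(patch_hist, ref_hist) * edge_likelihood(patch_dog, ref_dog)

# identical patch and reference -> maximal weight
ref = np.array([0.5, 0.3, 0.2])
w = combined_weight(ref, ref, np.zeros((4, 4)), np.zeros((4, 4)))
```

The multiplicative combination means a particle must score reasonably on both cues; other fusion rules (weighted sums, switching) are equally plausible readings of "combined appearance models".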
Citations: 5
An efficient particle filter for color-based tracking in complex scenes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425306
J. M. D. Rincón, C. Orrite-Uruñuela, J. Jaraba
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to deal with high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a probability color density image (PDI) in which each pixel encodes its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the integral image of the PDI.
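The integral-image acceleration mentioned above can be sketched as follows (an illustrative reconstruction of the standard trick; the toy PDI values and rectangle coordinates are assumptions, not the authors' data):

```python
import numpy as np

# Sketch: from a probability density image (PDI), the sum of pixel
# memberships inside any particle's rectangle is obtained in O(1) via the
# integral image, so every particle's likelihood is cheap to evaluate.

def integral_image(pdi):
    return pdi.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the PDI over rows y0..y1, cols x0..x1 (inclusive)."""
    total = ii[y1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return float(total)

pdi = np.array([[0.1, 0.2, 0.0],
                [0.3, 0.4, 0.1],
                [0.0, 0.2, 0.5]])
ii = integral_image(pdi)
# score of a particle whose rectangle covers rows 1-2, cols 1-2:
w = rect_sum(ii, 1, 1, 2, 2)  # 0.4 + 0.1 + 0.2 + 0.5
```

Building the integral image once per frame costs O(HW); after that each of the N particles is scored in constant time instead of summing its rectangle.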
Citations: 9
Facial biometry by stimulating salient singularity masks
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425363
G. Lefebvre, Christophe Garcia
We present a novel approach to face recognition based on salient singularity descriptors. Automatic feature extraction is performed by a salient point detector, and singularity information is selected by SOM region-based structuring. The spatial singularity distribution is preserved in order to activate specific neuron maps, and the local salient signature stimuli reveal the individual identity. The proposed method appears to be particularly robust to facial expressions and poses, as demonstrated in various experiments on well-known databases.
Citations: 2
Optimal deployment of cameras for video surveillance systems
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425342
F. Angella, Livier Reithler, Frédéric Gallesio
This article describes a new method aimed at the optimal deployment of sensors for video-surveillance systems, taking into account realistic models of fixed and PTZ cameras as well as video-analysis requirements. The approach relies on a spatial translation of constraints, a method for fast exploration of potential solutions, and hardware acceleration of inter-visibility computation. Thanks to a precise simulation of spatial coverage, this operational tool allows complex surveillance systems to be evaluated prior to installation.
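To make the deployment problem concrete, here is a deliberately simplified sketch (greedy set cover over discretised area cells — an assumption for illustration; the paper's actual constraint translation and solution search are more elaborate):

```python
# Sketch: each candidate camera position covers a set of area cells
# (e.g. from an inter-visibility computation). Greedily pick, at each step,
# the candidate covering the most still-uncovered cells.

def greedy_deploy(candidates, area, n_cameras):
    """candidates: {name: set of covered cells}; area: set of all cells."""
    uncovered, chosen = set(area), []
    for _ in range(n_cameras):
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered

candidates = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
chosen, left = greedy_deploy(candidates, {1, 2, 3, 4, 5, 6}, 2)
```

With two cameras this toy instance is fully covered by positions A and C; real deployments add per-camera field-of-view, PTZ, and video-analysis constraints on top of pure coverage.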
Citations: 34
Detecting hidden objects: Security imaging using millimetre-waves and terahertz
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425277
M. Kemp
There has been intense interest in the use of millimetre-wave and terahertz technology for the detection of concealed weapons, explosives and other threats. Radiation at these frequencies is safe, penetrates barriers, and has short enough wavelengths to allow discrimination between objects. In addition, many solids, including explosives, have characteristic spectroscopic signatures at terahertz wavelengths which can be used to identify them. This paper reviews the progress made in recent years and identifies the achievements, challenges and prospects for these technologies in checkpoint people screening, stand-off detection of improvised explosive devices (IEDs) and suicide bombers, as well as more specialized screening tasks.
Citations: 16
People tracking across two distant self-calibrated cameras
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425343
R. Pflugfelder, H. Bischof
People tracking is of fundamental importance in multi-camera surveillance systems, and many approaches to multi-camera tracking have been discussed in recent years. Most methods use various image features, the geometric relation between the cameras, or both as a cue. It is desirable to know the geometry of distant cameras, because geometry is not affected by, for example, drastic changes in object appearance or scene illumination. However, determining the camera geometry is cumbersome. This paper addresses that problem and contributes in two ways. On the one hand, an approach is presented that calibrates two distant cameras automatically; we build on previous work and focus in particular on calibration of the extrinsic parameters, using point correspondences acquired by detecting points on top of people's heads. On the other hand, qualitative experimental results on the PETS 2006 benchmark data show that the self-calibration is accurate enough for purely geometric tracking of people across distant cameras. Reliable features for matching are hardly available in such cases.
Citations: 34
Vision based anti-collision system for rail track maintenance vehicles
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425305
F. Maire
Maintenance trains travel in convoy. In Australia, only the first train of the convoy pays attention to track signalling (the other convoy vehicles simply follow the preceding vehicle). Because of human error, collisions can occur between maintenance vehicles. Although an anti-collision system based on a laser distance meter is already in operation, the existing system has a limited range due to the curvature of the tracks. In this paper, we introduce a vision-based anti-collision system. The proposed system induces a 3D model of the track as a piecewise quadratic function (with continuity constraints on the function and its derivative). The geometric constraints of rail tracks allow the creation of a completely self-calibrating system. Although road lane-marking detection algorithms perform well most of the time for rail detection, the metallic surface of a rail does not always behave like a road lane marking. We therefore had to develop new techniques to address the specific problems of rail reflectance.
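A piecewise quadratic with the continuity constraints mentioned above can be sketched like this (an assumed two-segment form for illustration; the paper's fitting procedure and segment layout are not specified here):

```python
import numpy as np

# Sketch: two quadratic segments joined at breakpoint xb with C1 continuity.
# The second segment keeps a free curvature a2 but inherits the first
# segment's value and slope at xb, so the function and its derivative are
# continuous across the joint.

def piecewise_quadratic(a1, b1, c1, a2, xb):
    """Segment 1: a1*x^2 + b1*x + c1 for x <= xb; segment 2 has curvature a2."""
    v = a1 * xb**2 + b1 * xb + c1   # value at the breakpoint
    s = 2 * a1 * xb + b1            # slope at the breakpoint
    def f(x):
        x = np.asarray(x, dtype=float)
        left = a1 * x**2 + b1 * x + c1
        right = a2 * (x - xb) ** 2 + s * (x - xb) + v  # matches value & slope
        return np.where(x <= xb, left, right)
    return f

track = piecewise_quadratic(a1=1.0, b1=0.0, c1=0.0, a2=-2.0, xb=1.0)
```

Writing the second segment in the shifted form `a2*(x-xb)^2 + s*(x-xb) + v` enforces both continuity constraints by construction, which is why only the curvature remains free.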
Citations: 28