
2007 IEEE Conference on Advanced Video and Signal Based Surveillance: Latest Publications

Human activity recognition with action primitives
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425332
Zsolt L. Husz, A. Wallace, P. Green
This paper considers the link between tracking algorithms and high-level human behavioural analysis, introducing the action primitives model, which recovers symbolic labels from tracked limb configurations. The model consists of clusters of similar short-term actions, the action primitives, formed automatically and then labelled by supervised learning. The model accommodates both short actions and longer activities, whether periodic or aperiodic, and new labels are added incrementally. We determine the effects of model parameters on the labelling of action primitives using ground truth derived from a motion capture system, and present a representative example of a labelled video sequence.
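The clustering-then-labelling pipeline the abstract describes could be sketched roughly as below. The k-means grouping, the farthest-first seeding, and the majority-vote labelling are illustrative stand-ins for the paper's unspecified procedure, not the authors' actual method:

```python
import numpy as np

def build_action_primitives(windows, k, iters=50):
    """Group short windows of tracked limb-configuration vectors into k
    'action primitive' clusters (plain k-means, farthest-first seeding)."""
    X = np.asarray(windows, dtype=float)
    centers = [X[0]]
    for _ in range(1, k):
        # farthest-first seeding keeps this sketch deterministic
        dmin = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dmin.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each window to its nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def label_clusters(labels, tags, k):
    """Give each cluster a symbolic label by majority vote over the
    supervised tags of its member windows."""
    out = {}
    for j in range(k):
        members = [t for t, l in zip(tags, labels) if l == j]
        if members:
            out[j] = max(set(members), key=members.count)
    return out
```

New symbolic labels can then be added incrementally by labelling only the clusters that appear as new data arrives.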
Citations: 19
Optimal deployment of cameras for video surveillance systems
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425342
F. Angella, Livier Reithler, Frédéric Gallesio
This article describes a new method for the optimal deployment of sensors in video-surveillance systems, taking into account realistic models of fixed and PTZ cameras as well as video-analysis requirements. The approach relies on a spatial translation of constraints, a method for fast exploration of potential solutions, and hardware acceleration of inter-visibility computation. This operational tool allows complex surveillance systems to be evaluated prior to installation, thanks to a precise simulation of their spatial coverage.
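The abstract does not detail the exploration strategy; purely as an illustration, a greedy set-cover heuristic is a common baseline for this kind of placement problem. The `candidates`/`required` representation below (candidate placements mapped to the grid cells they cover) is hypothetical:

```python
def greedy_camera_deployment(candidates, required, budget):
    """Pick camera placements greedily: at each step take the candidate
    covering the most still-uncovered required cells.
    candidates: dict mapping placement id -> set of covered cells."""
    uncovered = set(required)
    chosen = []
    while uncovered and len(chosen) < budget:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            break  # no remaining candidate adds coverage
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered
```

In a real tool the coverage sets would come from the simulated fields of view the article mentions (inter-visibility computed on the scene geometry), not from hand-written sets.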
Citations: 34
Distributed video surveillance using hardware-friendly sparse large margin classifiers
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425291
A. Kerhet, F. Leonardi, A. Boni, P. Lombardo, M. Magno, L. Benini
In contrast to video sensors that merely "watch" the world, present-day research aims at developing intelligent devices able to interpret it locally. A number of such devices are available on the market; they are very powerful on the one hand, but require either a connection to the power grid or massive rechargeable batteries on the other. MicrelEye, the wireless video sensor node presented in this paper, targets a different design point: portability and a scanty power budget, while still providing a prominent level of intelligence, namely object classification. To deal with such a challenging task, we propose and implement a new SVM-like, hardware-oriented algorithm called ERSVM. The case study considered in this work is people detection. The results obtained suggest that present technology allows the design of simple intelligent video nodes capable of performing local classification tasks.
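ERSVM itself is not specified in the abstract; the sketch below only illustrates the general idea behind a hardware-friendly sparse classifier: a linear decision function with few non-zero weights, evaluated entirely in integer fixed-point arithmetic, as a resource-limited node might do:

```python
def fixed_point_classify(x, weights, bias, frac_bits=8):
    """Evaluate a sparse linear decision y = sign(w.x + b) using only
    integer arithmetic. weights is sparse: {feature_index: float_weight}.
    Both operands are quantized to Q(frac_bits) fixed point, so the
    accumulator carries 2*frac_bits fractional bits."""
    scale = 1 << frac_bits
    acc = int(round(bias * scale * scale))
    for i, w in weights.items():
        acc += int(round(w * scale)) * int(round(x[i] * scale))
    return 1 if acc >= 0 else -1
```

Sparsity matters here because every non-zero weight costs one multiply-accumulate on the node, so a sparse large-margin classifier trades a little accuracy for a large drop in energy per classification.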
Citations: 31
Facial biometry by stimulating salient singularity masks
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425363
G. Lefebvre, Christophe Garcia
We present a novel approach to face recognition based on salient singularity descriptors. Automatic feature extraction is performed by a salient point detector, and singularity information is selected by a SOM region-based structuring. The spatial singularity distribution is preserved in order to activate specific neuron maps, and the local salient signature stimuli reveal the individual identity. The proposed method appears to be particularly robust to facial expressions and facial poses, as demonstrated in various experiments on well-known databases.
Citations: 2
An efficient particle filter for color-based tracking in complex scenes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425306
J. M. D. Rincón, C. Orrite-Uruñuela, J. Jaraba
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution function of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with a very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to deal with high-dimensional states without an excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a probability color density image (PDI) in which each pixel encodes its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the integral image of the PDI.
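The PDI/integral-image trick can be illustrated as follows. The hue-histogram back-projection is an assumed concrete choice of non-parametric color model; the O(1) rectangle sum is what makes scoring many particle windows cheap:

```python
import numpy as np

def probability_density_image(frame_hue, target_hist, bins=16):
    """Back-project a target hue histogram onto the frame: each pixel
    gets the probability of its hue under the target color model."""
    idx = np.clip((frame_hue * bins).astype(int), 0, bins - 1)
    return target_hist[idx]

def integral_image(pdi):
    """Summed-area table of the PDI."""
    return pdi.cumsum(axis=0).cumsum(axis=1)

def region_likelihood(ii, x0, y0, x1, y1):
    """Sum of the PDI over the window [y0,y1) x [x0,x1), computed in
    O(1) from the integral image, so every particle can be scored
    without touching its pixels again."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total
```

A particle's weight is then proportional to the PDI mass inside its candidate window, normalized over all particles.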
Citations: 9
Face recognition using non-linear image reconstruction
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425354
S. Duffner, Christophe Garcia
We present a face recognition technique based on a special type of convolutional neural network that is trained to extract characteristic features from face images and to reconstruct the corresponding reference face images, chosen beforehand for each individual to recognize. The reconstruction is realized by a so-called "bottleneck" neural network that learns to project face images into a low-dimensional vector space and to reconstruct the respective reference images from the projected vectors. In contrast to methods based on Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), etc., the projection is non-linear and depends on the choice of the reference images. Moreover, local and global processing are closely interconnected, and the respective parameters are learnt jointly. Once the neural network has been trained, new face images can be classified by comparing the respective projected vectors. We show experimentally that the choice of the reference images influences the final recognition performance and that this method outperforms linear projection methods in terms of precision and robustness.
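The final classification step (comparing projected vectors) can be sketched as below; `project` stands in for the trained bottleneck network, which is not reproduced here, and the nearest-neighbour rule in projection space is an assumed reading of "comparing the respective projected vectors":

```python
import numpy as np

def classify_by_projection(project, face, references):
    """Classify a face by projecting it into the learned low-dimensional
    space and picking the identity whose projected reference image is
    nearest in Euclidean distance."""
    z = project(face)
    dists = {name: float(np.linalg.norm(z - project(ref)))
             for name, ref in references.items()}
    return min(dists, key=dists.get)
```

Because the projection is learnt jointly with the reconstruction objective, swapping the reference images changes the embedding itself, which is why the paper studies how that choice affects recognition performance.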
Citations: 14
Multiple appearance models for face tracking in surveillance videos
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425341
Gurumurthy Swaminathan, V. Venkoparao, S. Bedros
Face tracking is a key component of automated video surveillance systems, supporting and enhancing tasks such as face recognition and video indexing. Face tracking in surveillance scenarios is a challenging problem due to ambient illumination variations, face pose changes, occlusions, and background clutter. We present an algorithm for tracking faces in surveillance video based on a particle filter that uses multiple appearance models for a robust representation of the face: a color-based appearance model complemented by an edge-based appearance model built on Difference of Gaussian (DoG) filters. We demonstrate that the combined appearance models are more robust to face and scene variations than a single appearance model. For example, the color template model handles pose variations well but deteriorates under illumination variations; conversely, the edge-based model is robust to illumination variations but fails under substantial pose changes. Hence, the combined model handles pose and illumination changes more robustly than either model by itself. We show how the algorithm performs in a real surveillance scenario in which the face undergoes various pose and illumination changes. The algorithm runs in real time at 20 fps on a standard 3.0 GHz desktop PC.
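Fusing the two appearance models into a single particle weight might look like this minimal sketch; the Gaussian similarity function and the 50/50 weighting are assumptions, not the paper's exact formulation:

```python
import numpy as np

def combined_likelihood(patch, color_template, edge_template,
                        w_color=0.5, w_edge=0.5, sigma=0.2):
    """Score one particle's image patch under two appearance models and
    fuse the scores, so a failure mode of one model (e.g. the color
    template under changing illumination) is compensated by the other.
    patch is a dict with a 'color' patch and an 'edges' (DoG) patch."""
    def gauss_sim(a, b):
        # mean squared difference mapped through a Gaussian kernel
        return float(np.exp(-np.mean((a - b) ** 2) / (2 * sigma ** 2)))
    return (w_color * gauss_sim(patch["color"], color_template)
            + w_edge * gauss_sim(patch["edges"], edge_template))
```

A patch that matches only one template still scores about half, instead of collapsing to zero as it would under a single-model likelihood.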
Citations: 5
Representing and recognizing complex events in surveillance applications
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425360
L. Snidaro, Massimo Belluz, G. Foresti
In this paper, we investigate the problem of representing and maintaining rule knowledge for a video surveillance application. We focus on the representation of complex events that cannot be straightforwardly expressed by canonical means. In particular, we highlight ongoing efforts towards a unifying framework for computable rule and taxonomical knowledge representation.
Citations: 30
Enhancing the spatial resolution of presence detection in a PIR based wireless surveillance network
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425326
P. Zappi, Elisabetta Farella, L. Benini
Pyroelectric sensors are low-cost, low-power small components commonly used only to trigger an alarm in the presence of humans or moving objects. However, an array of pyroelectric sensors allows more features to be extracted, such as direction of movement, speed, and the number of people. In this work, a low-cost wireless network based on pyroelectric infrared sensors is set up to track people's motion, and a novel technique is proposed to distinguish the direction of movement and the number of people passing. The approach has low computational requirements and is therefore well suited to limited-resource devices such as wireless nodes. The tests performed gave promising results.
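A minimal illustration of direction counting with a two-element PIR array, assuming a time-sorted event list and a fixed pairing window; the paper's actual technique is not specified in the abstract:

```python
def analyse_pir_events(events, gap=2.0):
    """Infer crossing direction and head count from the firing order of
    two adjacent PIR sensors 'A' and 'B'. events is a time-sorted list
    of (timestamp, sensor_id); two firings on different sensors closer
    than gap seconds are treated as one crossing."""
    crossings = []
    i = 0
    while i + 1 < len(events):
        (t0, s0), (t1, s1) = events[i], events[i + 1]
        if s0 != s1 and t1 - t0 <= gap:
            crossings.append("A->B" if s0 == "A" else "B->A")
            i += 2  # both firings consumed by this crossing
        else:
            i += 1  # unmatched firing: noise or lingering presence
    return crossings
```

The people count is then `len(crossings)`, and the firing-order rule is exactly why an array improves on a single sensor, which can only report presence.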
Citations: 82
Tracking by using dynamic shape model learning in the presence of occlusion
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425315
M. Asadi, A. Dore, A. Beoldo, C. Regazzoni
The paper presents a new corner-model-based learning method able to track non-rigid objects in the presence of occlusion. A voting mechanism, followed by a probability density analysis of the voting-space histogram, is used to estimate the new position of the target, and the model is updated at every frame. A problem arises during occlusion events, where the occluder's corners affect the model and the tracker may start following the occluder. The key to the method's success is automatically classifying the corners into two classes, good and malicious corners. Good corners are used to update the model in a conservative way, removing the corners that vote for highly voted wrong positions due to the occluder. This leads to continuous model learning during occlusion. Experimental results show successful tracking along with a more precise estimation of shape and motion during occlusion.
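The voting-plus-histogram-analysis step can be sketched as follows; the bin size, the single-peak selection, and the good/malicious split by distance from the peak are simplifying assumptions standing in for the paper's probability density analysis:

```python
from collections import Counter

def vote_for_position(corners, model_offsets, bin_size=4):
    """Each tracked corner votes for the object's reference point using
    its stored offset in the shape model; the densest bin of the
    voting-space histogram gives the position estimate, and corners
    whose vote landed far from it are flagged as 'malicious' (likely
    belonging to an occluder) and excluded from the model update."""
    votes = [(cx - ox, cy - oy) for (cx, cy), (ox, oy)
             in zip(corners, model_offsets)]
    hist = Counter((int(vx // bin_size), int(vy // bin_size))
                   for vx, vy in votes)
    peak_bin, _ = hist.most_common(1)[0]
    peak = ((peak_bin[0] + 0.5) * bin_size, (peak_bin[1] + 0.5) * bin_size)
    good = [i for i, (vx, vy) in enumerate(votes)
            if abs(vx - peak[0]) <= bin_size and abs(vy - peak[1]) <= bin_size]
    return peak, good
```

Updating the model only from the `good` indices is the conservative update the abstract describes: occluder corners keep voting for the wrong position and never re-enter the model.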
Citations: 7