
Latest publications from the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance

TrustCAM: Security and Privacy-Protection for an Embedded Smart Camera Based on Trusted Computing
Thomas Winkler, B. Rinner
Security and privacy protection are critical issues for public acceptance of camera networks. Smart cameras, with onboard image processing, can be used to identify and remove privacy sensitive image regions. Existing approaches, however, only address isolated aspects without considering the integration with established security technologies and the underlying platform. This work tries to fill this gap and presents TrustCAM, a security-enhanced smart camera. Based on Trusted Computing, we realize integrity protection, authenticity and confidentiality of image data. Multiple levels of privacy protection, together with access control, are supported. Impact on overall system performance is evaluated on a real prototype implementation.
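The abstract names three protection goals for image data (integrity, authenticity, confidentiality) without implementation detail. As a rough sketch of those goals only, the Python snippet below hashes a frame, authenticates it with an HMAC and encrypts it with a symmetric key; the Trusted Computing / TPM key binding that TrustCAM is actually built on is not modeled, and `shared_key`, `enc_key` and `frame_bytes` are placeholder names.

```python
# Minimal sketch (not the TrustCAM design): protect one captured frame with
# integrity (SHA-256), authenticity (HMAC) and confidentiality (Fernet/AES).
# In TrustCAM the keys would be bound to the camera's TPM; here they are
# plain in-memory placeholders.
import hashlib
import hmac
from cryptography.fernet import Fernet  # pip install cryptography

def protect_frame(frame_bytes: bytes, shared_key: bytes, enc_key: bytes) -> dict:
    digest = hashlib.sha256(frame_bytes).hexdigest()               # integrity
    tag = hmac.new(shared_key, frame_bytes, "sha256").hexdigest()  # authenticity
    ciphertext = Fernet(enc_key).encrypt(frame_bytes)              # confidentiality
    return {"sha256": digest, "hmac": tag, "ciphertext": ciphertext}

def verify_and_decrypt(packet: dict, shared_key: bytes, enc_key: bytes) -> bytes:
    frame = Fernet(enc_key).decrypt(packet["ciphertext"])
    if not hmac.compare_digest(
            hmac.new(shared_key, frame, "sha256").hexdigest(), packet["hmac"]):
        raise ValueError("authenticity check failed")
    if hashlib.sha256(frame).hexdigest() != packet["sha256"]:
        raise ValueError("integrity check failed")
    return frame

if __name__ == "__main__":
    enc_key = Fernet.generate_key()
    packet = protect_frame(b"raw jpeg bytes ...", b"camera-secret", enc_key)
    assert verify_and_decrypt(packet, b"camera-secret", enc_key) == b"raw jpeg bytes ..."
```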
Citations: 70
Global Identification of Tracklets in Video Using Long Range Identity Sensors
Xunyi Yu, A. Ganz
Reliable tracking of people in video and recovering their identities are of great importance to video analytics applications. For outdoor applications, long range identity sensors such as active RFID can provide good coverage in a large open space, though they only provide coarse location information. We propose a probabilistic approach using noisy inputs from multiple long range identity sensors to globally associate and identify fragmented tracklets generated by video tracking algorithms. We extend a network flow based data association model to recover tracklet identity efficiently. Our approach is evaluated using five minutes of video and active RFID measurements capturing four people wearing RFID tags and a couple of passersby. Simulation is then used to evaluate performance for a larger number of targets under different scenarios.
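As a simplified stand-in for the network-flow data association the abstract mentions, the sketch below assigns tracklets to RFID identities with the Hungarian algorithm over an assumed cost matrix of negative log-likelihoods; the paper's probabilistic model and flow formulation are not reproduced here.

```python
# Simplified stand-in for the paper's network-flow association: assign each
# video tracklet to at most one RFID identity by minimizing a cost matrix
# built from (hypothetical) negative log-likelihoods of the coarse RFID
# location readings given each tracklet's trajectory.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost: np.ndarray, max_cost: float = 10.0):
    """cost[i, j]: -log p(RFID readings of identity j | tracklet i)."""
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    # Reject implausible pairings so unmatched tracklets stay anonymous
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cost = rng.uniform(0.0, 20.0, size=(6, 4))        # 6 tracklets, 4 tagged people
    print(associate(cost))
```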
Citations: 7
Local Abnormality Detection in Video Using Subspace Learning
Ioannis Tziakos, A. Cavallaro, Li-Qun Xu
On-line abnormality detection in video without the use of object detection and tracking is a desirable task in surveillance. We address this problem for the case when labeled information about normal events is limited and information about abnormal events is not available. We formulate this problem as a one-class classification, where multiple local novelty classifiers (detectors) are used to first learn normal actions based on motion information and then to detect abnormal instances. Each detector is associated with a small region of interest and is trained over labeled samples projected on an appropriate subspace. We discover this subspace by using both labeled and unlabeled segments. We investigate the use of subspace learning and compare two methodologies based on linear (Principal Components Analysis) and on non-linear subspace learning (Locality Preserving Projections), respectively. Experimental results on a real underground station dataset show that the linear approach is better suited for cases where the subspace learning is restricted to the labeled samples, whereas the non-linear approach is preferable in the presence of additional unlabeled data.
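A minimal sketch of the linear (PCA) variant of such a local novelty detector, assuming per-region motion feature vectors are already available: the normal subspace is learned from labeled normal samples and test samples are flagged by reconstruction error. The feature extraction and the Locality Preserving Projections variant are not shown.

```python
# Sketch of a PCA-based local novelty detector: learn the normal-motion
# subspace for one region of interest, then flag test samples whose
# reconstruction error exceeds a threshold estimated from training data.
import numpy as np
from sklearn.decomposition import PCA

class PCANoveltyDetector:
    def __init__(self, n_components=5, quantile=0.99):
        self.pca = PCA(n_components=n_components)
        self.quantile = quantile
        self.threshold = None

    def fit(self, normal_features: np.ndarray):
        self.pca.fit(normal_features)
        err = self._reconstruction_error(normal_features)
        self.threshold = np.quantile(err, self.quantile)
        return self

    def _reconstruction_error(self, X):
        X_hat = self.pca.inverse_transform(self.pca.transform(X))
        return np.linalg.norm(X - X_hat, axis=1)

    def predict(self, X):
        # True = abnormal (falls outside the learned normal subspace)
        return self._reconstruction_error(X) > self.threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    normal = rng.normal(size=(500, 20))
    detector = PCANoveltyDetector().fit(normal)
    print(detector.predict(rng.normal(size=(3, 20)) * 5.0))
```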
Citations: 23
Privacy-Aware Object Representation for Surveillance Systems
Hauke Vagts, A. Bauer
Real-time object tracking, feature assessment and classification based on video are an enabling technology for improving situation awareness of human operators as well as for automated recognition of critical situations. To bridge the gap between video signal-processing output and spatio-temporal analysis of object behavior at the semantic level, a generic and sensor-independent object representation is necessary. However, in the case of public and corporate video surveillance, centralized storage of aggregated data leads to privacy violations. This article explains how a centralized object representation, complying with the Fair Information Practice Principles (FIP) privacy constraints, can be implemented for a video surveillance system.
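A hypothetical illustration of a centralized, sensor-independent object representation with purpose binding in the spirit of the FIP principles; the field names and purpose labels are invented for the example and are not the paper's schema.

```python
# Hypothetical sketch: each attribute of a tracked object carries a set of
# allowed purposes, so a central store can filter what each consumer may see.
from dataclasses import dataclass, field
from typing import Any, Dict, Set

@dataclass
class TrackedObject:
    object_id: str
    attributes: Dict[str, Any] = field(default_factory=dict)     # value per attribute
    purposes: Dict[str, Set[str]] = field(default_factory=dict)  # allowed purposes per attribute

    def add(self, name: str, value: Any, allowed_purposes: Set[str]):
        self.attributes[name] = value
        self.purposes[name] = allowed_purposes

    def view(self, purpose: str) -> Dict[str, Any]:
        """Return only the attributes whose declared purposes cover the request."""
        return {n: v for n, v in self.attributes.items()
                if purpose in self.purposes.get(n, set())}

if __name__ == "__main__":
    obj = TrackedObject("person-17")
    obj.add("position", (12.3, 4.5), {"tracking", "incident_review"})
    obj.add("face_crop", b"...", {"incident_review"})
    print(obj.view("tracking"))   # position only; face crop withheld
```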
Citations: 11
Incremental Mosaicking of Images from Autonomous, Small-Scale UAVs
S. Yahyanejad, D. Wischounig-Strucl, M. Quaritsch, B. Rinner
Unmanned aerial vehicles (UAVs) have been recently deployed in various civilian applications such as environmental monitoring, aerial imaging or surveillance. Small-scale UAVs are of special interest for first responders since they can rather easily provide bird’s eye view images of disaster areas. In this paper we present a hybrid approach to mosaick an overview image of the area of interest given a set of individual images captured by UAVs flying at low altitude. Our approach combines metadata-based and image-based stitching methods in order to overcome the challenges of low-altitude, small-scale UAV deployment such as non-nadir view, inaccurate sensor data, non-planar ground surfaces and limited computing and communication resources. For the generation of the overview image we preserve georeferencing as much as possible, since this is an important requirement for disaster management applications. Our mosaicking method has been implemented on our UAV system and evaluated based on a quality metric.
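A rough sketch of the hybrid idea, assuming OpenCV is available: a coarse placement is derived from flight metadata and then refined by feature-based matching against the mosaic. `metadata_to_affine` is a placeholder for a real georeferencing model, and the blending step is omitted.

```python
# Sketch of metadata-based placement followed by image-based refinement.
# OpenCV calls are standard; metadata_to_affine() is a placeholder.
import cv2
import numpy as np

def metadata_to_affine(gps, yaw_deg, gsd_m_per_px):
    """Placeholder: map GPS offset + heading to a coarse 2x3 affine transform."""
    theta = np.deg2rad(yaw_deg)
    c, s = np.cos(theta), np.sin(theta)
    tx, ty = gps[0] / gsd_m_per_px, gps[1] / gsd_m_per_px
    return np.array([[c, -s, tx], [s, c, ty]], dtype=np.float32)

def refine_with_features(mosaic_gray, image_gray):
    """Refine alignment with ORB features + RANSAC homography (may return None)."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(mosaic_gray, None)
    k2, d2 = orb.detectAndCompute(image_gray, None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    if len(matches) < 10:
        return None
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```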
Citations: 60
An Activity Monitoring System for Real Elderly at Home: Validation Study
N. Zouba, F. Brémond, M. Thonnat
Since the elderly population is growing rapidly, improving the quality of life of the elderly at home is of great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose an activity monitoring system which aims to achieve behavior analysis of elderly people. The proposed system consists of an approach combining heterogeneous sensor data to recognize activities at home. This approach combines data provided by video cameras with data provided by environmental sensors attached to house furnishings. In this paper, we validate the proposed activity monitoring system for the recognition of a set of daily activities (e.g. using kitchen equipment, preparing a meal) for 9 real elderly volunteers living in an experimental apartment. We compare the behavioral profiles of the 9 elderly volunteers. This study shows that the proposed system is thoroughly accepted by the elderly and is also well appreciated by the medical staff.
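A toy illustration (not the paper's models) of how a video-derived location and environmental sensor readings might be combined into an activity label; the sensor names and thresholds are invented for the example.

```python
# Toy fusion of a camera-derived zone with environmental sensor readings.
from dataclasses import dataclass

@dataclass
class Observation:
    person_zone: str          # from video tracking, e.g. "kitchen"
    cupboard_open: bool       # from a contact sensor on a kitchen cupboard
    stove_power_w: float      # from a power sensor on the stove

def recognize_activity(obs: Observation) -> str:
    if obs.person_zone == "kitchen" and (obs.cupboard_open or obs.stove_power_w > 50):
        return "preparing meal"
    if obs.person_zone == "kitchen":
        return "in kitchen"
    return "other"

if __name__ == "__main__":
    print(recognize_activity(Observation("kitchen", True, 0.0)))   # preparing meal
```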
Citations: 58
Human Motion Change Detection by Hierarchical Gaussian Process Dynamical Model with Particle Filter
Yafeng Yin, H. Man, Jing Wang, Guang Yang
Human motion change detection is a challenging task for a surveillance sensor system. Major challenges include complex scenes with a large number of targets and confusors, and complex motion behaviors of different human objects. Human motion change detection and understanding have been intensively studied over the past decades. In this paper, we present a Hierarchical Gaussian Process Dynamical Model (HGPDM) integrated with a particle filter tracker for human motion change detection. Firstly, the high dimensional human motion trajectory training data is projected to the low dimensional latent space with a two-layer hierarchy. The latent space at the leaf node in the bottom layer represents a typical human motion trajectory, while the root node in the upper layer controls the interaction and switching among leaf nodes. The trained HGPDM will then be used to classify test object trajectories which are captured by the particle filter tracker. If the motion trajectory is different from the motion in the previous frame, the root node will transfer the motion trajectory to the corresponding leaf node. In addition, HGPDM can be used to predict the next motion state, and provide Gaussian process dynamical samples for the particle filter framework. The experiment results indicate that our framework can accurately track and detect the human motion changes despite complex motion and occlusion. In addition, the sampling in the hierarchical latent space has greatly improved the efficiency of the particle filter framework.
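For readers unfamiliar with the tracking component, the sketch below shows a generic bootstrap particle filter step (predict, weight, resample) of the kind coupled with the HGPDM; the Gaussian motion and observation models are placeholders, not the learned latent-space dynamics of the paper.

```python
# Generic bootstrap particle filter step with placeholder Gaussian models.
import numpy as np

def particle_filter_step(particles, weights, observation,
                         motion_std=1.0, obs_std=2.0, rng=None):
    rng = rng or np.random.default_rng()
    # Predict: propagate each particle with a simple random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight by the likelihood of the observation under each particle
    likelihood = np.exp(-0.5 * np.sum((particles - observation) ** 2, axis=1) / obs_std**2)
    weights = weights * likelihood
    weights /= weights.sum() + 1e-12
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(size=(200, 2))
    weights = np.full(200, 1.0 / 200)
    particles, weights = particle_filter_step(particles, weights, np.array([1.0, 2.0]), rng=rng)
    print(np.average(particles, axis=0, weights=weights))   # state estimate
```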
Citations: 8
Automatic Inter-image Homography Estimation from Person Detections
M. Thaler, R. Mörzinger
Inter-image homographies are essential for many different tasks involving projective geometry. This paper proposes an adaptive correspondence estimation approach between person detections in a planar scene not relying on correspondence features, as is the case in many other RANSAC-based approaches. The result is a planar inter-image homography calculated from estimated point correspondences. The approach is self-configurable, adaptive and provides robustness over time by exploiting temporal and geometric information. We demonstrate the manifold applicability of the proposed approach on a variety of datasets. Improved results compared to a common baseline approach are shown and the influence of error sources such as missed detections, false detections and non-overlapping fields of view is investigated.
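The geometric core of the task, sketched under the assumption that matched ground points (e.g. foot positions of the same person in two overlapping views) are already available: a planar homography is estimated with RANSAC via OpenCV. The paper's contribution, obtaining these correspondences adaptively without appearance features, is not reproduced here.

```python
# Estimate a planar inter-image homography from matched foot positions.
import cv2
import numpy as np

def estimate_homography(feet_cam_a: np.ndarray, feet_cam_b: np.ndarray):
    """feet_cam_a, feet_cam_b: (N, 2) arrays of matched foot positions, N >= 4."""
    src = feet_cam_a.reshape(-1, 1, 2).astype(np.float32)
    dst = feet_cam_b.reshape(-1, 1, 2).astype(np.float32)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H, inlier_mask

def transfer_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map points from camera A's image plane to camera B's."""
    pts = pts.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```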
Citations: 7
Learning Directed Intention-driven Activities using Co-Clustering
K. Sankaranarayanan, James W. Davis
We present a novel approach for discovering directed intention-driven pedestrian activities across large urban areas. The proposed approach is based on a mutual information co-clustering technique that simultaneously clusters trajectory start locations in the scene which have similar distributions across stop locations and vice-versa. The clustering assignments are obtained by minimizing the loss of mutual information between a trajectory start-stop association matrix and a compressed co-clustered matrix, after which the scene activities are inferred from the compressed matrix. We demonstrate our approach using a dataset of long duration trajectories from multiple PTZ cameras covering a large area and show improved results over two other popular trajectory clustering and entry-exit learning approaches.
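An illustrative stand-in for the information-theoretic co-clustering described above: scikit-learn's SpectralCoclustering also clusters the rows (start zones) and columns (stop zones) of a start-stop association matrix jointly, though with a spectral rather than a mutual-information objective; the toy matrix below is invented.

```python
# Joint clustering of start zones (rows) and stop zones (columns) of a
# trajectory start-stop association matrix, as a stand-in for the paper's
# mutual-information co-clustering.
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy association matrix: counts of trajectories from start zone i to stop zone j
association = np.array([
    [30,  2,  1,  0],
    [28,  3,  0,  1],
    [ 1,  0, 25, 20],
    [ 0,  2, 22, 19],
], dtype=float)

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(association)
print("start-zone clusters:", model.row_labels_)
print("stop-zone clusters: ", model.column_labels_)
```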
Citations: 6
Human Action Recognition using a Hybrid NTLD Classifier
A. Rani, Sanjeev Kumar, C. Micheloni, G. Foresti
This work proposes a hybrid classifier to recognize human actions in different contexts. In particular, the proposed hybrid classifier (a neural tree with linear discriminant nodes, NTLD) is a neural tree whose nodes can be either simple perceptrons or recursive Fisher linear discriminant (RFLD) classifiers. A novel technique to substitute badly trained perceptrons with more performant linear discriminators is introduced. For a given frame, geometrical features are extracted from the skeleton of the human blob (silhouette). These geometrical features are collected for a fixed number of consecutive frames to recognize the corresponding activity. The resulting feature vector is adopted as input to the NTLD classifier. The performance of the proposed classifier has been evaluated on two available databases.
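A rough sketch of the feature/classification pipeline the abstract describes, with a single Fisher linear discriminant (scikit-learn's LDA) standing in for the full NTLD tree; the geometric features and the window length are illustrative assumptions.

```python
# Per-frame geometric features of a silhouette skeleton, stacked over a fixed
# window and classified; LDA stands in for the NTLD tree described above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

WINDOW = 10  # number of consecutive frames per action sample (assumed)

def frame_features(skeleton_points: np.ndarray) -> np.ndarray:
    """Toy geometric features of one frame's skeleton: extent, aspect ratio, centroid height."""
    width, height = np.ptp(skeleton_points, axis=0)
    centroid_y = skeleton_points[:, 1].mean()
    return np.array([width, height, width / (height + 1e-6), centroid_y])

def window_features(frames) -> np.ndarray:
    """Concatenate per-frame features over a fixed-length window."""
    return np.concatenate([frame_features(f) for f in frames[:WINDOW]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.stack([window_features([rng.uniform(0, 100, (15, 2)) for _ in range(WINDOW)])
                  for _ in range(60)])
    y = rng.integers(0, 2, size=60)            # fake action labels
    clf = LinearDiscriminantAnalysis().fit(X, y)
    print(clf.predict(X[:3]))
```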
Citations: 2