
Latest Publications: 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance

TrustCAM: Security and Privacy-Protection for an Embedded Smart Camera Based on Trusted Computing
Thomas Winkler, B. Rinner
Security and privacy protection are critical issues for public acceptance of camera networks. Smart cameras, with onboard image processing, can be used to identify and remove privacy sensitive image regions. Existing approaches, however, only address isolated aspects without considering the integration with established security technologies and the underlying platform. This work tries to fill this gap and presents TrustCAM, a security-enhanced smart camera. Based on Trusted Computing, we realize integrity protection, authenticity and confidentiality of image data. Multiple levels of privacy protection, together with access control, are supported. Impact on overall system performance is evaluated on a real prototype implementation.
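To make the three protection goals concrete, the sketch below shows one generic way to combine encryption (confidentiality) with a keyed MAC (integrity and authenticity) over a captured frame. It is not TrustCAM's TPM-based design; the key handling, function names and frame format are assumptions for illustration only.

```python
import hmac
import hashlib
import os
from cryptography.fernet import Fernet

# Hypothetical per-camera secrets; in TrustCAM these roles are played by
# TPM-protected keys, not plain software keys held in memory.
AUTH_KEY = os.urandom(32)          # keyed MAC -> authenticity + integrity
ENC_KEY = Fernet.generate_key()    # symmetric key -> confidentiality

def protect_frame(jpeg_bytes: bytes, frame_id: int) -> dict:
    """Encrypt a frame and attach a MAC binding it to this camera and frame id."""
    ciphertext = Fernet(ENC_KEY).encrypt(jpeg_bytes)
    tag = hmac.new(AUTH_KEY, ciphertext + frame_id.to_bytes(8, "big"),
                   hashlib.sha256).hexdigest()
    return {"frame_id": frame_id, "ciphertext": ciphertext, "mac": tag}

def verify_and_open(record: dict) -> bytes:
    """Check integrity/authenticity before decrypting; raise on tampering."""
    expected = hmac.new(AUTH_KEY,
                        record["ciphertext"] + record["frame_id"].to_bytes(8, "big"),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["mac"]):
        raise ValueError("frame failed authenticity/integrity check")
    return Fernet(ENC_KEY).decrypt(record["ciphertext"])

if __name__ == "__main__":
    rec = protect_frame(b"\xff\xd8...fake jpeg...\xff\xd9", frame_id=42)
    assert verify_and_open(rec) == b"\xff\xd8...fake jpeg...\xff\xd9"
```

In the actual system the keys would be bound to the camera's Trusted Platform Module rather than generated and held in software as above.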
Citations: 70
A Method for Counting People in Crowded Scenes
Donatello Conte, P. Foggia, G. Percannella, Francesco Tufano, M. Vento
This paper presents a novel method to count people for video surveillance applications. Methods in the literature either follow a direct approach, by first detecting people and then counting them, or an indirect approach, by establishing a relation between some easily detectable scene features and the estimated number of people. The indirect approach is considerably more robust, but it is not easy to take into account such factors as perspective or people groups with different densities. The proposed technique, while based on the indirect approach, specifically addresses these problems; furthermore it is based on a trainable estimator that does not require an explicit formulation of a priori knowledge about the perspective and density effects present in the scene at hand. In the experimental evaluation, the method has been extensively compared with the algorithm by Albiol et al., which provided the highest performance at the PETS 2009 contest on people counting. The experimentation has used the public PETS 2009 datasets. The results confirm that the proposed method improves the accuracy, while retaining the robustness of the indirect approach.
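The indirect approach described above reduces to fitting a regressor from easily computed frame features to an annotated people count. The toy sketch below illustrates that idea with made-up features (foreground area, edge pixels) and a plain least-squares fit; it does not reproduce the paper's estimator, which additionally learns perspective and density effects.

```python
import numpy as np

# Toy per-frame scene features (foreground area in pixels, edge pixel count)
# with annotated people counts. Values are invented for illustration only.
features = np.array([
    [1200,  340],
    [2500,  700],
    [4100, 1150],
    [5300, 1500],
    [8000, 2300],
], dtype=float)
counts = np.array([2, 4, 7, 9, 14], dtype=float)

# Indirect approach: learn a mapping features -> count instead of detecting
# and counting individual people. A linear model fitted by least squares
# stands in here for the paper's trainable estimator.
X = np.hstack([features, np.ones((len(features), 1))])  # add bias term
w, *_ = np.linalg.lstsq(X, counts, rcond=None)

def estimate_count(fg_area: float, edge_pixels: float) -> float:
    return float(np.array([fg_area, edge_pixels, 1.0]) @ w)

print(round(estimate_count(3000.0, 900.0)))  # rough estimate for a new frame
```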
Citations: 87
Intelligent Sensor Information System For Public Transport – To Safely Go…
P. Miller, Weiru Liu, C. Fowler, Huiyu Zhou, Jiali Shen, Jianbing Ma, Jianguo Zhang, Weiqi Yan, K. Mclaughlin, S. Sezer
The Intelligent Sensor Information System (ISIS) is described. ISIS is an active CCTV approach to reducing crime and anti-social behavior on public transport systems such as buses. Key to the system is the idea of event composition, in which directly detected atomic events are combined to infer higher-level events with semantic meaning. Video analytics are described that profile the gender of passengers and track them as they move about a 3-D space. The overall system architecture is described, which integrates the on-board event recognition with the control room software over a wireless network to generate a real-time alert. Data from a preliminary data-gathering trial are presented.
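Event composition as described here amounts to rules that lift co-occurring atomic detections into a semantic event. The snippet below is a minimal, hypothetical illustration; the event names, time window and rule are invented and are not the ISIS event vocabulary.

```python
from dataclasses import dataclass

@dataclass
class AtomicEvent:
    name: str        # produced directly by a video/audio analytic
    timestamp: float # seconds since start of journey
    location: str    # e.g. "upper_deck_rear"

def compose(events, window_s=30.0):
    """Infer a higher-level event when two atomic events co-occur.

    Purely illustrative rule: 'person_standing_rear' followed by
    'graffiti_motion' at the same location within window_s seconds is
    reported as a semantic 'vandalism_suspected' event."""
    alerts = []
    for a in events:
        for b in events:
            if (a.name == "person_standing_rear" and b.name == "graffiti_motion"
                    and a.location == b.location
                    and 0.0 <= b.timestamp - a.timestamp <= window_s):
                alerts.append(("vandalism_suspected", b.timestamp, b.location))
    return alerts

stream = [
    AtomicEvent("person_standing_rear", 10.0, "upper_deck_rear"),
    AtomicEvent("graffiti_motion", 25.0, "upper_deck_rear"),
]
print(compose(stream))  # [('vandalism_suspected', 25.0, 'upper_deck_rear')]
```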
Citations: 16
Local Abnormality Detection in Video Using Subspace Learning
Ioannis Tziakos, A. Cavallaro, Li-Qun Xu
On-line abnormality detection in video without the use of object detection and tracking is a desirable task in surveillance. We address this problem for the case when labeled information about normal events is limited and information about abnormal events is not available. We formulate this problem as a one-class classification, where multiple local novelty classifiers (detectors) are used to first learn normal actions based on motion information and then to detect abnormal instances. Each detector is associated with a small region of interest and is trained over labeled samples projected on an appropriate subspace. We discover this subspace by using both labeled and unlabeled segments. We investigate the use of subspace learning and compare two methodologies based on linear (Principal Components Analysis) and on non-linear subspace learning (Locality Preserving Projections), respectively. Experimental results on a real underground station dataset show that the linear approach is better suited for cases where the subspace learning is restricted to the labeled samples, whereas the non-linear approach is preferable in the presence of additional unlabeled data.
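A minimal sketch of the linear (PCA) variant of this one-class scheme for a single local region is shown below: learn a low-dimensional subspace from normal motion descriptors and flag test descriptors with a large reconstruction error. The synthetic data, descriptor dimensionality and threshold rule are assumptions; the non-linear LPP variant is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" motion descriptors for one local region (e.g. optical-flow
# histograms): synthetic low-dimensional data standing in for the labeled
# normal segments used in training.
latent = rng.normal(size=(200, 4))
normal = latent @ rng.normal(size=(4, 16)) + 0.05 * rng.normal(size=(200, 16))

# Linear subspace learning (PCA via SVD) of the normal motion.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = Vt[:4]                        # 4-D subspace of normal motion

def novelty_score(x):
    """Reconstruction error of a descriptor w.r.t. the normal subspace."""
    centered = x - mean
    recon = (centered @ basis.T) @ basis
    return float(np.linalg.norm(centered - recon))

threshold = np.percentile([novelty_score(x) for x in normal], 99)

def is_abnormal(x):
    return novelty_score(x) > threshold

print(is_abnormal(rng.normal(5.0, 1.0, size=16)))   # far from the subspace: prints True
```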
Citations: 23
Privacy-Aware Object Representation for Surveillance Systems
Hauke Vagts, A. Bauer
Real-time object tracking, feature assessment and classification based on video are an enabling technology for improving situation awareness of human operators as well as for automated recognition of critical situations. To bridge the gap between video signal-processing output and spatio-temporal analysis of object behavior at the semantic level, a generic and sensor-independent object representation is necessary. However, in the case of public and corporate video surveillance, centralized storage of aggregated data leads to privacy violations. This article explains how a centralized object representation, complying with the Fair Information Practice Principles (FIP) privacy constraints, can be implemented for a video surveillance system.
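As a rough illustration of purpose binding in a centralized object representation, the sketch below tags each stored attribute with the purpose it was collected for and refuses access for other purposes. The schema, field names and purposes are hypothetical and are not the paper's actual representation.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SurveillanceObject:
    """Sensor-independent object record with purpose-bound attributes,
    in the spirit of the Fair Information Practice Principles."""
    pseudonym: str                                   # no raw identity stored
    attributes: dict = field(default_factory=dict)   # name -> (value, purpose)

    def add(self, name: str, value: Any, purpose: str) -> None:
        self.attributes[name] = (value, purpose)     # purpose binding

    def get(self, name: str, requested_purpose: str) -> Any:
        value, purpose = self.attributes[name]
        if requested_purpose != purpose:
            raise PermissionError(f"{name} not released for {requested_purpose}")
        return value

obj = SurveillanceObject(pseudonym="obj-7f3a")
obj.add("position", (12.4, 3.1), purpose="situation_awareness")
print(obj.get("position", "situation_awareness"))   # allowed
# obj.get("position", "marketing")                  # would raise PermissionError
```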
Citations: 11
Incremental Mosaicking of Images from Autonomous, Small-Scale UAVs
S. Yahyanejad, D. Wischounig-Strucl, M. Quaritsch, B. Rinner
Unmanned aerial vehicles (UAVs) have been recently deployed in various civilian applications such as environmental monitoring, aerial imaging or surveillance. Small-scale UAVs are of special interest for first responders since they can rather easily provide bird’s eye view images of disaster areas. In this paper we present a hybrid approach to mosaick an overview image of the area of interest given a set of individual images captured by UAVs flying at low altitude. Our approach combines metadata-based and image-based stitching methods in order to overcome the challenges of low-altitude, small-scale UAV deployment such as non-nadir view, inaccurate sensor data, non-planar ground surfaces and limited computing and communication resources. For the generation of the overview image we preserve georeferencing as much as possible, since this is an important requirement for disaster management applications. Our mosaicking method has been implemented on our UAV system and evaluated based on a quality metric.
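For the image-based half of such a hybrid pipeline, a common building block is feature-based homography estimation between an incoming frame and the current mosaic. The sketch below shows that step with OpenCV's ORB features and RANSAC; the metadata-based coarse placement, canvas growth and blending that a full incremental mosaicker needs are omitted, and the function is an illustrative assumption rather than the authors' implementation.

```python
import cv2
import numpy as np

def stitch_pair(base_img, new_img):
    """Image-based refinement step of a hybrid mosaic (illustrative only).

    In a metadata+image hybrid scheme, GPS/IMU metadata would give a coarse
    initial placement of new_img; here only the feature-based refinement is
    shown, estimating a homography from ORB matches with RANSAC."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(new_img, None)
    kp2, des2 = orb.detectAndCompute(base_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the new frame into the mosaic's coordinate frame; a real system
    # would also expand the canvas and blend overlapping regions.
    h, w = base_img.shape[:2]
    warped = cv2.warpPerspective(new_img, H, (w, h))
    return np.where(warped > 0, warped, base_img)
```

It would be called with two overlapping frames loaded via cv2.imread, accumulating the mosaic frame by frame.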
Citations: 60
Global Identification of Tracklets in Video Using Long Range Identity Sensors
Xunyi Yu, A. Ganz
Reliable tracking of people in video and recovering their identities are of great importance to video analytics applications. For outdoor applications, long range identity sensors such as active RFID can provide good coverage in a large open space, though they only provide coarse location information. We propose a probabilistic approach using noisy inputs from multiple long range identity sensors to globally associate and identify fragmented tracklets generated by video tracking algorithms. We extend a network flow based data association model to recover tracklet identity efficiently. Our approach is evaluated using five minutes of video and active RFID measurements capturing four people wearing RFID tags and a couple of passersby. Simulation is then used to evaluate performance for a larger number of targets under different scenarios.
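A network-flow data association of this kind can be posed as a min-cost flow problem in which one unit of flow per identity must be routed through exactly one tracklet. The sketch below uses networkx with invented integer costs standing in for RFID-derived association likelihoods; it is not the paper's exact model, which additionally handles fragmented tracklets and noisy long-range measurements.

```python
import networkx as nx

# Invented integer costs approximating -log P(identity | RFID measurements
# near the tracklet); lower cost = better supported association.
costs = {
    ("id_alice", "tracklet_1"): 1, ("id_alice", "tracklet_2"): 8,
    ("id_bob",   "tracklet_1"): 9, ("id_bob",   "tracklet_2"): 2,
}

G = nx.DiGraph()
identities = {i for i, _ in costs}
tracklets = {t for _, t in costs}

G.add_node("S", demand=-len(identities))   # source pushes one unit per identity
G.add_node("T", demand=len(identities))    # sink absorbs them
for i in identities:
    G.add_edge("S", i, capacity=1, weight=0)
for t in tracklets:
    G.add_edge(t, "T", capacity=1, weight=0)
for (i, t), c in costs.items():
    G.add_edge(i, t, capacity=1, weight=c)

flow = nx.min_cost_flow(G)
assignment = {i: t for i in identities for t, f in flow[i].items() if f == 1}
print(assignment)   # {'id_alice': 'tracklet_1', 'id_bob': 'tracklet_2'}
```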
Citations: 7
Human Motion Change Detection by Hierarchical Gaussian Process Dynamical Model with Particle Filter
Yafeng Yin, H. Man, Jing Wang, Guang Yang
Human motion change detection is a challenging task for a surveillance sensor system. Major challenges include complex scenes with a large number of targets and confusors, and complex motion behaviors of different human objects. Human motion change detection and understanding have been intensively studied over the past decades. In this paper, we present a Hierarchical Gaussian Process Dynamical Model (HGPDM) integrated with a particle filter tracker for human motion change detection. Firstly, the high dimensional human motion trajectory training data is projected to the low dimensional latent space with a two-layer hierarchy. The latent space at the leaf node in the bottom layer represents a typical human motion trajectory, while the root node in the upper layer controls the interaction and switching among leaf nodes. The trained HGPDM will then be used to classify test object trajectories which are captured by the particle filter tracker. If the motion trajectory is different from the motion in the previous frame, the root node will transfer the motion trajectory to the corresponding leaf node. In addition, HGPDM can be used to predict the next motion state, and provide Gaussian process dynamical samples for the particle filter framework. The experiment results indicate that our framework can accurately track and detect the human motion changes despite complex motion and occlusion. In addition, the sampling in the hierarchical latent space has greatly improved the efficiency of the particle filter framework.
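The particle-filter side of such a tracker can be illustrated independently of the learned dynamical model. The sketch below is a bare bootstrap particle filter for a 2-D position with a random-walk motion model standing in for the HGPDM prior; all noise parameters and observations are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bootstrap particle filter for a 2-D position track. In the paper the
# motion prior comes from a hierarchical GPDM in a low-dimensional latent
# space, replaced here by a simple random walk.
N = 500
particles = rng.normal(0.0, 1.0, size=(N, 2))     # initial guesses
weights = np.full(N, 1.0 / N)

def step(observation, motion_std=0.3, obs_std=0.5):
    global particles, weights
    # Predict: propagate particles with the (stand-in) dynamical model.
    particles += rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight particles by the likelihood of the observed position.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / obs_std**2)
    weights /= weights.sum()
    # Resample (multinomial resampling via weighted choice).
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    weights = np.full(N, 1.0 / N)
    return particles.mean(axis=0)                  # state estimate

for t, obs in enumerate([(0.2, 0.1), (0.5, 0.4), (0.9, 0.8)]):
    print(t, step(np.array(obs)))
```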
Citations: 8
An Activity Monitoring System for Real Elderly at Home: Validation Study
N. Zouba, F. Brémond, M. Thonnat
Since the elderly population is growing rapidly, improving the quality of life of elderly people at home is of great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose an activity monitoring system which aims to achieve behavior analysis of elderly people. The proposed system consists of an approach combining heterogeneous sensor data to recognize activities at home. This approach combines data provided by video cameras with data provided by environmental sensors attached to house furnishings. In this paper, we validate the proposed activity monitoring system for the recognition of a set of daily activities (e.g. using kitchen equipment, preparing a meal) for 9 real elderly volunteers living in an experimental apartment. We compare the behavioral profiles of the 9 elderly volunteers. This study shows that the proposed system is thoroughly accepted by the elderly and it is also well appreciated by the medical staff.
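A toy example of the heterogeneous-sensor fusion idea is given below: a video-derived location event and an environmental contact-sensor event are combined by a simple temporal rule into a daily-activity label. The event names, the rule and the time window are assumptions for illustration, not the system's actual recognition models.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    source: str      # "video" or "environment"
    name: str        # e.g. zone entered, appliance used
    timestamp: float # seconds

def recognize_meal_preparation(events, window_s=120.0):
    """Report 'preparing meal' when the video analytics place the person in
    the kitchen and a kitchen-equipment sensor fires soon afterwards."""
    kitchen = [e.timestamp for e in events
               if e.source == "video" and e.name == "person_in_kitchen"]
    stove = [e.timestamp for e in events
             if e.source == "environment" and e.name == "stove_switch_on"]
    return any(0.0 <= s - k <= window_s for k in kitchen for s in stove)

log = [
    SensorEvent("video", "person_in_kitchen", 100.0),
    SensorEvent("environment", "stove_switch_on", 160.0),
]
print(recognize_meal_preparation(log))  # True
```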
Citations: 58
Robust Dynamic Super Resolution under Inaccurate Motion Estimation
Minjae Kim, Bonhwa Ku, Daesung Chung, Hyunhak Shin, Bonghyup Kang, D. Han, Hanseok Ko
In image reconstruction, dynamic super resolution image reconstruction algorithms have been investigated to enhance video frames sequentially, where explicit motion estimation is considered as a major factor in the performance. This paper proposes a novel measurement validation method to attain robust image reconstruction results under inaccurate motion estimation. In addition, we present an effective scene change detection method dedicated to the proposed super resolution technique for minimizing erroneous results when abrupt scene changes occur in the video frames. Representative experimental results show excellent performance of the proposed algorithm in terms of the reconstruction quality and processing speed.
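Measurement validation in a sequential super-resolution loop can be thought of as gating: a new low-resolution frame only updates the high-resolution estimate if its registration residual is small enough. The sketch below illustrates that gate with a simplified imaging model (integer shift plus 2x decimation); the gate value, update step and imaging model are assumptions, not the paper's algorithm.

```python
import numpy as np

def update_super_resolution(hr_estimate, lowres_frame, est_shift,
                            residual_gate=20.0, step=0.2):
    """Gated dynamic-SR update (illustrative): motion-compensate the current
    high-res estimate with the estimated shift, compare against the new
    low-res frame, and only fuse the frame if the residual passes the gate."""
    # Simulated imaging model: integer shift + 2x nearest-neighbour decimation.
    predicted = np.roll(hr_estimate, est_shift, axis=(0, 1))[::2, ::2]
    residual = lowres_frame - predicted
    if np.abs(residual).mean() > residual_gate:
        return hr_estimate            # inaccurate motion estimate: reject frame
    # Back-project the upsampled residual into the high-res estimate.
    upsampled = np.repeat(np.repeat(residual, 2, axis=0), 2, axis=1)
    return hr_estimate + step * np.roll(upsampled, (-est_shift[0], -est_shift[1]),
                                        axis=(0, 1))

hr = np.zeros((8, 8))
frame = np.ones((4, 4))
hr = update_super_resolution(hr, frame, est_shift=(1, 1))
print(hr.round(2))
```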
Citations: 5