Security and privacy protection are critical issues for public acceptance of camera networks. Smart cameras, with onboard image processing, can be used to identify and remove privacy-sensitive image regions. Existing approaches, however, only address isolated aspects without considering the integration with established security technologies and the underlying platform. This work tries to fill this gap and presents TrustCAM, a security-enhanced smart camera. Based on Trusted Computing, we realize integrity protection, authenticity and confidentiality of image data. Multiple levels of privacy protection, together with access control, are supported. Impact on overall system performance is evaluated on a real prototype implementation.
{"title":"TrustCAM: Security and Privacy-Protection for an Embedded Smart Camera Based on Trusted Computing","authors":"Thomas Winkler, B. Rinner","doi":"10.1109/AVSS.2010.38","DOIUrl":"https://doi.org/10.1109/AVSS.2010.38","url":null,"abstract":"Security and privacy protection are critical issues forpublic acceptance of camera networks. Smart cameras,with onboard image processing, can be used to identifyand remove privacy sensitive image regions. Existing approaches,however, only address isolated aspects withoutconsidering the integration with established security technologiesand the underlying platform. This work tries to fillthis gap and presents TrustCAM, a security-enhanced smartcamera. Based on Trusted Computing, we realize integrityprotection, authenticity and confidentiality of image data.Multiple levels of privacy protection, together with accesscontrol, are supported. Impact on overall system performanceis evaluated on a real prototype implementation.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134078619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
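The on-camera removal of privacy-sensitive regions that TrustCAM performs can be illustrated with a minimal sketch. The rectangle coordinates and fill value below are invented for the example; the paper's actual method of locating and protecting such regions is not shown here:

```python
import numpy as np

def redact_region(frame, x, y, w, h, fill=0):
    """Blank out a privacy-sensitive rectangle in a grayscale frame.

    A toy stand-in for the kind of region removal a smart camera
    could perform on board before any image data leaves the device.
    """
    out = frame.copy()
    out[y:y + h, x:x + w] = fill
    return out

# A synthetic 10x10 "frame" with a hypothetical sensitive region at (2, 3).
frame = np.arange(100, dtype=np.uint8).reshape(10, 10)
redacted = redact_region(frame, x=2, y=3, w=4, h=3)
```

In a real deployment the redaction would run before encryption and signing, so the sensitive pixels never appear in any protected stream.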
Reliable tracking of people in video and recovering their identities are of great importance to video analytics applications. For outdoor applications, long-range identity sensors such as active RFID can provide good coverage in a large open space, though they only provide coarse location information. We propose a probabilistic approach using noisy inputs from multiple long-range identity sensors to globally associate and identify fragmented tracklets generated by video tracking algorithms. We extend a network-flow-based data association model to recover tracklet identity efficiently. Our approach is evaluated using five minutes of video and active RFID measurements capturing four people wearing RFID tags and a couple of passersby. Simulation is then used to evaluate performance for larger numbers of targets under different scenarios.
{"title":"Global Identification of Tracklets in Video Using Long Range Identity Sensors","authors":"Xunyi Yu, A. Ganz","doi":"10.1109/AVSS.2010.46","DOIUrl":"https://doi.org/10.1109/AVSS.2010.46","url":null,"abstract":"Reliable tracking of people in video and recovering theiridentities are of great importance to video analytics applications.For outdoor applications, long range identity sensorssuch as active RFID can provide good coverage in alarge open space, though they only provide coarse locationinformation. We propose a probabilistic approach usingnoisy inputs from multiple long range identity sensorsto globally associate and identify fragmented tracklets generatedby video tracking algorithms. We extend a networkflow based data association model to recover tracklet identityefficiently. Our approach is evaluated using five minutesof video and active RFID measurements capturing four peoplewearing RFID tags and a couple of passersby. Simulationis then used to evaluate performance for larger numberof targets under different scenarios.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115656333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
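The global association step can be illustrated at toy scale. The paper solves it efficiently with a network-flow model; the sketch below instead brute-forces the best identity assignment over a small, invented log-likelihood matrix standing in for noisy RFID evidence:

```python
import itertools
import numpy as np

# Toy log-likelihoods: rows = tracklets, columns = candidate identities
# (e.g. derived from RFID proximity readings). The values are invented
# for illustration.
loglik = np.array([
    [-0.1, -2.0, -3.0],
    [-2.5, -0.2, -1.5],
    [-3.0, -1.8, -0.3],
])

def best_assignment(loglik):
    """Exhaustively find the identity permutation with maximum total
    log-likelihood. This is only viable for tiny examples; a network
    flow formulation scales to many tracklets."""
    n = loglik.shape[0]
    best, best_score = None, -np.inf
    for perm in itertools.permutations(range(n)):
        score = sum(loglik[t, perm[t]] for t in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score

assignment, score = best_assignment(loglik)
```

Each tracklet here is matched to the identity that jointly maximizes the evidence, rather than greedily per tracklet, which is the point of a global formulation.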
On-line abnormality detection in video without the use of object detection and tracking is a desirable task in surveillance. We address this problem for the case when labeled information about normal events is limited and information about abnormal events is not available. We formulate this problem as one-class classification, where multiple local novelty classifiers (detectors) are used to first learn normal actions based on motion information and then to detect abnormal instances. Each detector is associated with a small region of interest and is trained over labeled samples projected on an appropriate subspace. We discover this subspace by using both labeled and unlabeled segments. We investigate the use of subspace learning and compare two methodologies based on linear (Principal Components Analysis) and on non-linear subspace learning (Locality Preserving Projections), respectively. Experimental results on a real underground station dataset show that the linear approach is better suited for cases where the subspace learning is restricted to the labeled samples, whereas the non-linear approach is preferable in the presence of additional unlabeled data.
{"title":"Local Abnormality Detection in Video Using Subspace Learning","authors":"Ioannis Tziakos, A. Cavallaro, Li-Qun Xu","doi":"10.1109/AVSS.2010.70","DOIUrl":"https://doi.org/10.1109/AVSS.2010.70","url":null,"abstract":"On-line abnormality detection in video without the use ofobject detection and tracking is a desirable task in surveillance.We address this problem for the case when labeledinformation about normal events is limited and informationabout abnormal events is not available. We formulatethis problem as a one-class classification, where multiplelocal novelty classifiers (detectors) are used to first learnnormal actions based on motion information and then todetect abnormal instances. Each detector is associated toa small region of interest and is trained over labeled samplesprojected on an appropriate subspace. We discover thissubspace by using both labeled and unlabeled segments.We investigate the use of subspace learning and comparetwo methodologies based on linear (Principal ComponentsAnalysis) and on non-linear subspace learning (LocalityPreserving Projections), respectively. 
Experimental resultson a real underground station dataset shows that the linearapproach is better suited for cases where the subspacelearning is restricted to the labeled samples, whereas thenon-linear approach is preferable in the presence of additionalunlabeled data.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114139564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
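A minimal sketch of the linear (PCA) variant of such a detector, using invented 5-D features in place of the paper's per-region motion descriptors:

```python
import numpy as np

# Toy "normal" samples that lie (almost) in a 2-D plane of a 5-D
# feature space, standing in for real motion descriptors.
t = np.linspace(0, 2 * np.pi, 200)
normal = np.stack([np.cos(t), np.sin(t),
                   0.01 * np.cos(3 * t), 0.01 * np.sin(3 * t),
                   np.zeros_like(t)], axis=1)

# Linear subspace learning (PCA): keep the top-2 principal directions.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:2]

def novelty(x):
    """Reconstruction error after projection onto the learned subspace;
    a large error means the sample is poorly explained by normal data."""
    d = x - mean
    return float(np.linalg.norm(d - d @ basis.T @ basis))

# One-class decision: threshold calibrated on the normal samples only,
# since no abnormal training data is available.
threshold = np.percentile([novelty(s) for s in normal], 99)
abnormal_sample = np.array([0.0, 0.0, 5.0, 0.0, 0.0])
```

The non-linear variant would replace the SVD step with a Locality Preserving Projections embedding while keeping the same one-class decision rule.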
Real-time object tracking, feature assessment and classification based on video are an enabling technology for improving situation awareness of human operators as well as for automated recognition of critical situations. To bridge the gap between video signal-processing output and spatio-temporal analysis of object behavior at the semantic level, a generic and sensor-independent object representation is necessary. However, in the case of public and corporate video surveillance, centralized storage of aggregated data leads to privacy violations. This article explains how a centralized object representation, complying with the Fair Information Practice Principles (FIP) privacy constraints, can be implemented for a video surveillance system.
{"title":"Privacy-Aware Object Representation for Surveillance Systems","authors":"Hauke Vagts, A. Bauer","doi":"10.1109/AVSS.2010.73","DOIUrl":"https://doi.org/10.1109/AVSS.2010.73","url":null,"abstract":"Real-time object tracking, feature assessment and classification based on video are an enabling technology for improving situation awareness of human operators as well as for automated recognition of critical situations. To bridge the gap between video signal-processing output and spatio-temporal analysis of object behavior at the semantic level, a generic and sensor-independent object representation is necessary. However, in the case of public and corporate video surveillance, centralized storage of aggregated data leads to privacy violations. This article explains how a centralized object representation, complying with the Fair Information Practice Principles (FIP) privacy constraints, can be implemented for a video surveillance system.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117274999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
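One FIP-style safeguard such a centralized representation can employ is storing only keyed pseudonyms of object identifiers, so records remain linkable for analysis but cannot be tied back to a person without the key. The sketch below is an illustrative assumption (key, field names and values are invented), not the article's actual data model:

```python
import hashlib
import hmac

# Key held by the data controller; invented for this example.
SECRET_KEY = b"surveillance-operator-key"

def pseudonymize(object_id: str) -> str:
    """Keyed hash (HMAC-SHA256) of an object identifier: deterministic,
    so the same object links across records, but not reversible
    without the key."""
    return hmac.new(SECRET_KEY, object_id.encode(), hashlib.sha256).hexdigest()

# A hypothetical sensor-independent object record for central storage.
record = {
    "object": pseudonymize("track-0042"),
    "position": (12.4, 7.9),          # world coordinates, not pixels
    "timestamp": "2010-08-29T10:15:00Z",
}
```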
S. Yahyanejad, D. Wischounig-Strucl, M. Quaritsch, B. Rinner
Unmanned aerial vehicles (UAVs) have recently been deployed in various civilian applications such as environmental monitoring, aerial imaging or surveillance. Small-scale UAVs are of special interest for first responders since they can rather easily provide bird’s-eye-view images of disaster areas. In this paper we present a hybrid approach to mosaic an overview image of the area of interest given a set of individual images captured by UAVs flying at low altitude. Our approach combines metadata-based and image-based stitching methods in order to overcome the challenges of low-altitude, small-scale UAV deployment such as non-nadir views, inaccurate sensor data, non-planar ground surfaces and limited computing and communication resources. For the generation of the overview image we preserve georeferencing as much as possible, since this is an important requirement for disaster management applications. Our mosaicking method has been implemented on our UAV system and evaluated based on a quality metric.
{"title":"Incremental Mosaicking of Images from Autonomous, Small-Scale UAVs","authors":"S. Yahyanejad, D. Wischounig-Strucl, M. Quaritsch, B. Rinner","doi":"10.1109/AVSS.2010.14","DOIUrl":"https://doi.org/10.1109/AVSS.2010.14","url":null,"abstract":"Unmanned aerial vehicles (UAVs) have been recently deployedin various civilian applications such as environmentalmonitoring, aerial imaging or surveillance. Small-scaleUAVs are of special interest for first responders since theycan rather easily provide bird’s eye view images of disasterareas. In this paper we present a hybrid approach to mosaickan overview image of the area of interest given a setof individual images captured by UAVs flying at low altitude.Our approach combines metadata-based and imagebasedstitching methods in order to overcome the challengesof low-altitude, small-scale UAV deployment such as nonnadirview, inaccurate sensor data, non-planar ground surfacesand limited computing and communication resources.For the generation of the overview image we preserve georeferencingas much as possible, since this is an importantrequirement for disaster management applications. Ourmosaicking method has been implemented on our UAV systemand evaluated based on a quality metric.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115256596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
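The metadata-based half of such a pipeline can be sketched as placing each image on a shared canvas from its geo-position alone, before any image-based refinement. The scale, coordinates and grayscale tiles below are invented for illustration, and the refinement step is omitted:

```python
import numpy as np

# Assumed ground resolution of the mosaic canvas; invented value.
METERS_PER_PIXEL = 0.05

def place(canvas, image, east_m, north_m):
    """Paste `image` onto `canvas` at the pixel offset implied by its
    position east/north of the canvas origin (top-left corner).
    This preserves georeferencing: canvas pixels map back to meters."""
    x = int(round(east_m / METERS_PER_PIXEL))
    y = int(round(north_m / METERS_PER_PIXEL))
    h, w = image.shape
    canvas[y:y + h, x:x + w] = image
    return canvas

canvas = np.zeros((100, 100), dtype=np.uint8)
tile = np.full((20, 20), 200, dtype=np.uint8)     # one UAV image
canvas = place(canvas, tile, east_m=1.0, north_m=2.0)
```

Because placement depends only on sensor metadata, it is cheap enough for incremental on-board use; image-based stitching then corrects for the inaccurate sensor data.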
As the elderly population grows rapidly, improving the quality of life of the elderly at home is of great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose an activity monitoring system which aims to achieve behavior analysis of elderly people. The proposed system combines heterogeneous sensor data to recognize activities at home: data provided by video cameras is combined with data provided by environmental sensors attached to house furnishings. In this paper, we validate the proposed activity monitoring system for the recognition of a set of daily activities (e.g. using kitchen equipment, preparing a meal) for 9 real elderly volunteers living in an experimental apartment, and we compare the behavioral profiles of the 9 volunteers. This study shows that the proposed system is well accepted by the elderly and is also appreciated by the medical staff.
{"title":"An Activity Monitoring System for Real Elderly at Home: Validation Study","authors":"N. Zouba, F. Brémond, M. Thonnat","doi":"10.1109/AVSS.2010.83","DOIUrl":"https://doi.org/10.1109/AVSS.2010.83","url":null,"abstract":"Since the population of the elderly grows highly, the improvement of the quality of life of elderly at home is of a great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose an activity monitoring system which aims to achieve behavior analysis of elderly people. The proposed system consists of an approach combining heterogeneous sensor data to recognize activities at home. This approach combines data provided by video cameras with data provided by environmental sensors attached to house furnishings. In this paper, we validate the proposed activity monitoring system for the recognition of a set of daily activities (e.g. using kitchen equipment, preparing meal) for 9 real elderly volunteers living in an experimental apartment. We compare the behavioral profile between the 9 elderly volunteers. This study shows that the proposed system is thoroughly accepted by the elderly and it is also well appreciated by the medical staff.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123486918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
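A toy fusion rule in the spirit of combining camera-derived location with environmental-sensor events; the zone names, event names and rules are illustrative assumptions, not the system's actual activity ontology:

```python
def recognize(person_zone, sensor_events):
    """Label an activity from where the camera sees the person
    (`person_zone`) and which environmental sensors have fired
    (`sensor_events`, a set of event names)."""
    if person_zone == "kitchen" and "stove_on" in sensor_events:
        return "preparing meal"
    if person_zone == "kitchen" and "fridge_open" in sensor_events:
        return "using kitchen equipment"
    if person_zone == "living_room" and not sensor_events:
        return "resting"
    return "unknown"

activity = recognize("kitchen", {"stove_on", "cupboard_open"})
```

The value of the fusion is that neither source alone suffices: the camera sees where the person is but not what the appliances are doing, and vice versa.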
Human motion change detection is a challenging task for a surveillance sensor system. Major challenges include complex scenes with a large number of targets and confusors, and complex motion behaviors of different human objects. Human motion change detection and understanding have been intensively studied over the past decades. In this paper, we present a Hierarchical Gaussian Process Dynamical Model (HGPDM) integrated with a particle filter tracker for human motion change detection. Firstly, the high-dimensional human motion trajectory training data is projected to a low-dimensional latent space with a two-layer hierarchy. The latent space at each leaf node in the bottom layer represents a typical human motion trajectory, while the root node in the upper layer controls the interaction and switching among leaf nodes. The trained HGPDM is then used to classify test object trajectories which are captured by the particle filter tracker. If the motion trajectory is different from the motion in the previous frame, the root node will transfer the motion trajectory to the corresponding leaf node. In addition, the HGPDM can be used to predict the next motion state, and to provide Gaussian process dynamical samples for the particle filter framework. The experimental results indicate that our framework can accurately track and detect human motion changes despite complex motion and occlusion. In addition, sampling in the hierarchical latent space has greatly improved the efficiency of the particle filter framework.
{"title":"Human Motion Change Detection by Hierarchical Gaussian Process Dynamical Model with Particle Filter","authors":"Yafeng Yin, H. Man, Jing Wang, Guang Yang","doi":"10.1109/AVSS.2010.55","DOIUrl":"https://doi.org/10.1109/AVSS.2010.55","url":null,"abstract":"Human motion change detection is a challenging taskfor a surveillance sensor system. Major challenges includecomplex scenes with a large amount of targets and confusors,and complex motion behaviors of different human objects.Human motion change detection and understandinghave been intensively studied over the past decades. In thispaper, we present a Hierarchical Gaussian Process DynamicalModel (HGPDM) integrated with particle filter trackerfor humanmotion change detection. Firstly, the high dimensionalhuman motion trajectory training data is projected tothe low dimensional latent space with a two-layer hierarchy.The latent space at the leaf node in bottom layer representsa typical humanmotion trajectory, while the root node in theupper layer controls the interaction and switching amongleaf nodes. The trained HGPDM will then be used to classifytest object trajectories which are captured by the particlefilter tracker. If the motion trajectory is different fromthe motion in the previous frame, the root node will transferthe motion trajectory to the corresponding leaf node. Inaddition, HGPDM can be used to predict the next motionstate, and provide Gaussian process dynamical samples forthe particle filter framework. 
The experiment results indicatethat our framework can accurately track and detect thehuman motion changes despite of complex motion and occlusion.In addition, the sampling in the hierarchical latentspace has greatly improved the efficiency of the particle filterframework.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122042661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
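The particle filter component can be sketched in its bootstrap form. The constant-velocity proposal below is an invented stand-in for the HGPDM-derived dynamical samples the paper uses, and the 1-D state is a simplification of a full motion trajectory:

```python
import numpy as np

rng = np.random.default_rng(42)

# Bare-bones bootstrap particle filter for a 1-D position, showing the
# predict / weight / resample loop.
N = 500
particles = rng.normal(0.0, 1.0, N)     # initial position hypotheses
true_pos = 0.0

for _ in range(20):
    true_pos += 0.5                                  # object moves
    particles += 0.5 + rng.normal(0.0, 0.2, N)       # predict (proposal)
    obs = true_pos + rng.normal(0.0, 0.1)            # noisy measurement
    w = np.exp(-0.5 * ((obs - particles) / 0.1) ** 2)
    w /= w.sum()                                     # weight by likelihood
    idx = rng.choice(N, size=N, p=w)
    particles = particles[idx]                       # resample

estimate = particles.mean()
```

In the paper's framework, the efficiency gain comes from drawing the prediction step from the learned low-dimensional latent dynamics instead of a generic motion model like the one above.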
Inter-image homographies are essential for many different tasks involving projective geometry. This paper proposes an adaptive correspondence estimation approach between person detections in a planar scene that does not rely on correspondence features, as is the case in many other RANSAC-based approaches. The result is a planar inter-image homography calculated from estimated point correspondences. The approach is self-configurable and adaptive, and provides robustness over time by exploiting temporal and geometric information. We demonstrate the broad applicability of the proposed approach on a variety of datasets. Improved results compared to a common baseline approach are shown, and the influence of error sources such as missed detections, false detections and non-overlapping fields of view is investigated.
{"title":"Automatic Inter-image Homography Estimation from Person Detections","authors":"M. Thaler, R. Mörzinger","doi":"10.1109/AVSS.2010.35","DOIUrl":"https://doi.org/10.1109/AVSS.2010.35","url":null,"abstract":"Inter-image homographies are essential for many differenttasks involving projective geometry. This paper proposesan adaptive correspondence estimation approach betweenperson detections in a planar scene not relying oncorrespondence features as it is the case in many otherRANSAC-based approaches. The result is a planar interimagehomography calculated from estimated point correspondences.The approach is self-configurable, adaptiveand provides robustness over time by exploiting temporaland geometric information. We demonstrate the manifoldapplicability of the proposed approach on a variety ofdatasets. Improved results compared to a common baselineapproach are shown and the influence of error sources suchas missed detections, false detections and non overlappingfield of views is investigated.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125872306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
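Once point correspondences are available, the homography itself is typically estimated with the standard direct linear transform (DLT); the paper's contribution lies in obtaining the correspondences from person detections rather than image features. A self-contained DLT sketch on synthetic points:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: fit H with dst ~ H @ src (homogeneous)
    from >= 4 point correspondences, via the null space of the
    stacked constraint matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)          # right singular vector of smallest
    return H / H[2, 2]                # singular value, normalized

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic check: points related by a known transform (scale 2,
# translation (3, -1)), standing in for matched person detections.
src = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.25)]
dst = [(2 * x + 3, 2 * y - 1) for x, y in src]
H = estimate_homography(src, dst)
mapped = apply_h(H, (0.2, 0.7))
```

A RANSAC wrapper around this estimator would repeatedly fit H on random correspondence subsets and keep the hypothesis with most inliers, which is where robustness to missed and false detections comes from.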
We present a novel approach for discovering directed intention-driven pedestrian activities across large urban areas. The proposed approach is based on a mutual information co-clustering technique that simultaneously clusters trajectory start locations in the scene which have similar distributions across stop locations, and vice-versa. The clustering assignments are obtained by minimizing the loss of mutual information between a trajectory start-stop association matrix and a compressed co-clustered matrix, after which the scene activities are inferred from the compressed matrix. We demonstrate our approach using a dataset of long-duration trajectories from multiple PTZ cameras covering a large area and show improved results over two other popular trajectory clustering and entry-exit learning approaches.
{"title":"Learning Directed Intention-driven Activities using Co-Clustering","authors":"K. Sankaranarayanan, James W. Davis","doi":"10.1109/AVSS.2010.41","DOIUrl":"https://doi.org/10.1109/AVSS.2010.41","url":null,"abstract":"We present a novel approach for discovering directedintention-driven pedestrian activities across large urban areas.The proposed approach is based on a mutual informationco-clustering technique that simultaneously clusterstrajectory start locations in the scene which have similardistributions across stop locations and vice-versa. The clusteringassignments are obtained by minimizing the loss ofmutual information between a trajectory start-stop associationmatrix and a compressed co-clustered matrix, afterwhich the scene activities are inferred from the compressedmatrix. We demonstrate our approach using a dataset oflong duration trajectories from multiple PTZ cameras coveringa large area and show improved results over two otherpopular trajectory clustering and entry-exit learning approaches.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126021083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
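The quantity being preserved can be shown concretely: the mutual information between trajectory start and stop locations, computed from a start-stop count matrix. The counts below are invented, and the co-clustering optimization itself is omitted:

```python
import numpy as np

# Toy start-stop association matrix: rows = start locations,
# columns = stop locations, entries = trajectory counts.
counts = np.array([
    [30, 2, 1],
    [1, 25, 3],
    [2, 1, 35],
], dtype=float)

def mutual_information(counts):
    """I(start; stop) in bits from a joint count matrix."""
    p = counts / counts.sum()                 # joint distribution
    px = p.sum(axis=1, keepdims=True)         # marginal over starts
    py = p.sum(axis=0, keepdims=True)         # marginal over stops
    nz = p > 0                                # skip zero cells in the log
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

mi = mutual_information(counts)
```

Co-clustering compresses rows and columns into groups while minimizing the drop in this value, so start-stop groupings that carry the most "intention" information survive the compression.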
This work proposes a hybrid classifier to recognize human actions in different contexts. In particular, the proposed hybrid classifier, a neural tree with linear discriminant nodes (NTLD), is a neural tree whose nodes can be either simple perceptrons or recursive Fisher linear discriminant (RFLD) classifiers. A novel technique to substitute badly trained perceptrons with better-performing linear discriminants is introduced. For a given frame, geometrical features are extracted from the skeleton of the human blob (silhouette). These geometrical features are collected over a fixed number of consecutive frames to recognize the corresponding activity. The resulting feature vector is adopted as input to the NTLD classifier. The performance of the proposed classifier has been evaluated on two available databases.
{"title":"Human Action Recognition using a Hybrid NTLD Classifier","authors":"A. Rani, Sanjeev Kumar, C. Micheloni, G. Foresti","doi":"10.1109/AVSS.2010.11","DOIUrl":"https://doi.org/10.1109/AVSS.2010.11","url":null,"abstract":"This work proposes a hybrid classifier to recognize humanactions in different contexts. In particular, the proposedhybrid classifier (a neural tree with linear discriminantnodes NTLD), is a neural tree whose nodes can be eithersimple preceptrons or recursive fisher linear discriminant(RFLD) classifiers. A novel technique to substitute badtrained perceptron with more performant linear discriminatorsis introduced. For a given frame, geometrical featuresare extracted from the skeleton of the human blob (silhouette).These geometrical features are collected for a fixednumber of consecutive frames to recognize the correspondingactivity. The resulting feature vector is adopted as inputto the NTLD classifier. The performance of the proposedclassifier has been evaluated on two available databases.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130556956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
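A single Fisher linear discriminant, the building block behind the RFLD nodes, can be sketched as follows. The 2-D toy data stands in for the geometrical skeleton features, and the recursive splitting of the paper's RFLD variant is omitted:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher linear discriminant: direction w maximizing between-class
    separation relative to within-class scatter, w = Sw^-1 (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter = sum of (biased covariance * class size).
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

# Two toy classes of 2-D "geometric features": same shape, shifted
# along x, so the discriminative direction is (mostly) horizontal.
X0 = np.array([[0.0, 0.0], [1.0, 0.1], [0.2, 0.9], [0.9, 1.0]])
X1 = X0 + np.array([4.0, 0.0])

w = fisher_direction(X0, X1)
threshold = (X0.mean(axis=0) + X1.mean(axis=0)) / 2 @ w

def classify(x):
    """Project onto w and threshold at the midpoint of the class means."""
    return int(x @ w > threshold)
```

In the NTLD, such a discriminant replaces a node's perceptron when the perceptron trains poorly; the tree structure then routes samples down to further nodes.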