Robust Dynamic Super Resolution under Inaccurate Motion Estimation
Minjae Kim, Bonhwa Ku, Daesung Chung, Hyunhak Shin, Bonghyup Kang, D. Han, Hanseok Ko
doi:10.1109/AVSS.2010.49
In image reconstruction, dynamic super resolution image reconstruction algorithms have been investigated to enhance video frames sequentially, where explicit motion estimation is considered a major factor in performance. This paper proposes a novel measurement validation method to attain robust image reconstruction results under inaccurate motion estimation. In addition, we present an effective scene change detection method, dedicated to the proposed super resolution technique, for minimizing erroneous results when abrupt scene changes occur in the video frames. Representative experimental results show excellent performance of the proposed algorithm in terms of reconstruction quality and processing speed.
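The abstract does not spell out the validation rule; the sketch below only illustrates the general idea of gating each new low-resolution measurement against its motion-compensated prediction, so that pixels inconsistent with the estimated motion are excluded from the update. The residual threshold, noise level and blending gain are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def validate_measurements(predicted_lr, observed_lr, sigma=5.0, k=3.0):
    """Gate each low-resolution pixel by its motion-compensated residual:
    measurements more than k*sigma away from the prediction are treated
    as motion-estimation failures and rejected."""
    residual = observed_lr.astype(np.float64) - predicted_lr.astype(np.float64)
    return np.abs(residual) <= k * sigma  # boolean validity mask

def fuse_frame(predicted_lr, observed_lr, mask, gain=0.5):
    """Blend only validated measurements into the running estimate;
    rejected pixels keep the prediction, so inaccurate motion vectors
    cannot corrupt the reconstruction."""
    fused = predicted_lr.astype(np.float64).copy()
    fused[mask] += gain * (observed_lr[mask].astype(np.float64) - fused[mask])
    return fused
```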
{"title":"Robust Dynamic Super Resolution under Inaccurate Motion Estimation","authors":"Minjae Kim, Bonhwa Ku, Daesung Chung, Hyunhak Shin, Bonghyup Kang, D. Han, Hanseok Ko","doi":"10.1109/AVSS.2010.49","DOIUrl":"https://doi.org/10.1109/AVSS.2010.49","url":null,"abstract":"In image reconstruction, dynamic super resolutionimage reconstruction algorithms have been investigated toenhance video frames sequentially, where explicit motionestimation is considered as a major factor in theperformance. This paper proposes a novel measurementvalidation method to attain robust image reconstructionresults under inaccurate motion estimation. In addition, wepresent an effective scene change detection methoddedicated to the proposed super resolution technique forminimizing erroneous results when abrupt scene changesoccur in the video frames. Representative experimentalresults show excellent performance of the proposedalgorithm in terms of the reconstruction quality andprocessing speed.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129892363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Intelligent Sensor Information System For Public Transport – To Safely Go…
P. Miller, Weiru Liu, C. Fowler, Huiyu Zhou, Jiali Shen, Jianbing Ma, Jianguo Zhang, Weiqi Yan, K. Mclaughlin, S. Sezer
doi:10.1109/AVSS.2010.36
The Intelligent Sensor Information System (ISIS) is described. ISIS is an active CCTV approach to reducing crime and anti-social behavior on public transport systems such as buses. Key to the system is the idea of event composition, in which directly detected atomic events are combined to infer higher-level events with semantic meaning. Video analytics are described that profile the gender of passengers and track them as they move about a 3-D space. The overall system architecture is described, which integrates the on-board event recognition with the control room software over a wireless network to generate real-time alerts. Data from a preliminary data-gathering trial are presented.
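Event composition in the general sense is easy to sketch: atomic events emitted by the analytics are matched against a higher-level pattern, per tracked passenger. The event labels, pattern matcher and time window below are hypothetical stand-ins, not ISIS's actual composition machinery.

```python
from dataclasses import dataclass

@dataclass
class AtomicEvent:
    label: str      # hypothetical labels, e.g. "enters_bus", "loiters"
    track_id: int   # id of the tracked passenger
    t: float        # timestamp in seconds

def compose(events, pattern, window=10.0):
    """Report a higher-level event whenever a single track produces the
    atomic events in `pattern` consecutively, within `window` seconds."""
    by_track = {}
    for e in sorted(events, key=lambda e: e.t):
        by_track.setdefault(e.track_id, []).append(e)
    hits = []
    for tid, seq in by_track.items():
        labels = [e.label for e in seq]
        for i in range(len(seq) - len(pattern) + 1):
            if labels[i:i + len(pattern)] == list(pattern) and \
                    seq[i + len(pattern) - 1].t - seq[i].t <= window:
                hits.append((tid, seq[i].t))
    return hits
```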
{"title":"Intelligent Sensor Information System For Public Transport – To Safely Go…","authors":"P. Miller, Weiru Liu, C. Fowler, Huiyu Zhou, Jiali Shen, Jianbing Ma, Jianguo Zhang, Weiqi Yan, K. Mclaughlin, S. Sezer","doi":"10.1109/AVSS.2010.36","DOIUrl":"https://doi.org/10.1109/AVSS.2010.36","url":null,"abstract":"The Intelligent Sensor Information System (ISIS) isdescribed. ISIS is an active CCTV approach to reducingcrime and anti-social behavior on public transportsystems such as buses. Key to the system is the idea ofevent composition, in which directly detected atomicevents are combined to infer higher-level events withsemantic meaning. Video analytics are described thatprofile the gender of passengers and track them as theymove about a 3-D space. The overall system architectureis described which integrates the on-board eventrecognition with the control room software over a wirelessnetwork to generate a real-time alert. Data frompreliminary data-gathering trial is presented.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133718722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Method for Counting People in Crowded Scenes
Donatello Conte, P. Foggia, G. Percannella, Francesco Tufano, M. Vento
doi:10.1109/AVSS.2010.78
This paper presents a novel method to count people for video surveillance applications. Methods in the literature either follow a direct approach, by first detecting people and then counting them, or an indirect approach, by establishing a relation between some easily detectable scene features and the estimated number of people. The indirect approach is considerably more robust, but it is not easy to take into account such factors as perspective or groups of people with different densities. The proposed technique, while based on the indirect approach, specifically addresses these problems; furthermore, it is based on a trainable estimator that does not require an explicit formulation of a priori knowledge about the perspective and density effects present in the scene at hand. In the experimental evaluation, the method has been extensively compared with the algorithm by Albiol et al., which provided the highest performance at the PETS 2009 contest on people counting. The experiments used the public PETS 2009 datasets. The results confirm that the proposed method improves accuracy while retaining the robustness of the indirect approach.
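As a rough illustration of the indirect approach, a trainable estimator can be as simple as a least-squares regressor from cheap per-frame features to a people count. The feature choice here is a placeholder; the paper's estimator additionally accounts for the perspective and density effects that this sketch ignores.

```python
import numpy as np

def fit_count_estimator(features, counts):
    """Least-squares fit from per-frame features (n_frames, n_features),
    e.g. number of moving corner points and foreground area, to the
    ground-truth people count. A bias column is appended."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return w

def estimate_count(features, w):
    """Apply the fitted weights to new frames."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return np.clip(X @ w, 0, None)  # a crowd estimate cannot be negative
```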
{"title":"A Method for Counting People in Crowded Scenes","authors":"Donatello Conte, P. Foggia, G. Percannella, Francesco Tufano, M. Vento","doi":"10.1109/AVSS.2010.78","DOIUrl":"https://doi.org/10.1109/AVSS.2010.78","url":null,"abstract":"This paper presents a novel method to count people forvideo surveillance applications. Methods in the literatureeither follow a direct approach, by first detecting people andthen counting them, or an indirect approach, by establishinga relation between some easily detectable scene featuresand the estimated number of people. The indirect approachis considerably more robust, but it is not easy to take intoaccount such factors as perspective or people groups withdifferent densities.The proposed technique, while based on the indirect approach,specifically addresses these problems; furthermoreit is based on a trainable estimator that does not requirean explicit formulation of a priori knowledge about the perspectiveand density effects present in the scene at hand.In the experimental evaluation, the method has beenextensively compared with the algorithm by Albiol et al.,which provided the highest performance at the PETS 2009contest on people counting. The experimentation has usedthe public PETS 2009 datasets. The results confirm that theproposed method improves the accuracy, while retaining therobustness of the indirect approach.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132431058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Multi-Modal Object Tracking using Dynamic Performance Metrics
S. Denman, C. Fookes, S. Sridharan, D. Ryan
doi:10.1109/AVSS.2010.16
Intelligent surveillance systems typically use a single visual spectrum modality for their input. These systems work well in controlled conditions, but often fail when lighting is poor or environmental effects such as shadows, dust or smoke are present. Thermal spectrum imagery is not as susceptible to environmental effects; however, thermal imaging sensors are more sensitive to noise and produce only grayscale output, making it difficult to distinguish between objects. Several approaches to combining the visual and thermal modalities have been proposed, but they are limited by the assumption that both modalities are performing equally well. When one modality fails, existing approaches are unable to detect the drop in performance and disregard the under-performing modality. In this paper, a novel middle fusion approach for combining visual and thermal spectrum images for object tracking is proposed. Motion and object detection is performed on each modality, and the object detection results for each modality are fused based on the current performance of each modality. Modality performance is determined by comparing the number of objects tracked by the system with the number detected by each mode, with a small allowance made for objects entering and exiting the scene. The tracking performance of the proposed fusion scheme is compared with the performance of the visual and thermal modes individually, and with a baseline middle fusion scheme. Improvement in tracking performance using the proposed fusion approach is demonstrated. The proposed approach is also shown to be able to detect the failure of an individual modality and disregard its results, ensuring performance is not degraded in such situations.
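The abstract gives the performance metric only in outline. A minimal sketch, assuming a simple agreement score between the tracked and detected object counts and a linear weighting of detection scores, might look like this; the allowance parameter and the weighting rule are assumptions, not the paper's formulation.

```python
def modality_weight(n_tracked, n_detected, allowance=1):
    """Score a modality by how well its detection count agrees with the
    number of objects currently tracked; `allowance` absorbs objects
    entering or leaving the scene. Returns a weight in [0, 1]."""
    if n_tracked == 0:
        return 1.0 if n_detected <= allowance else 0.0
    miss = max(abs(n_detected - n_tracked) - allowance, 0)
    return max(1.0 - miss / n_tracked, 0.0)

def fuse_scores(visual_score, thermal_score, w_vis, w_th):
    """Middle fusion of per-region detection scores, weighted by each
    modality's current performance; a failed modality (weight 0) is
    effectively disregarded."""
    total = w_vis + w_th
    if total == 0:
        return 0.0
    return (w_vis * visual_score + w_th * thermal_score) / total
```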
{"title":"Multi-Modal Object Tracking using Dynamic Performance Metrics","authors":"S. Denman, C. Fookes, S. Sridharan, D. Ryan","doi":"10.1109/AVSS.2010.16","DOIUrl":"https://doi.org/10.1109/AVSS.2010.16","url":null,"abstract":"Intelligent surveillance systems typically use a single visualspectrum modality for their input. These systems workwell in controlled conditions, but often fail when lightingis poor, or environmental effects such as shadows, dust orsmoke are present. Thermal spectrum imagery is not as susceptibleto environmental effects, however thermal imagingsensors are more sensitive to noise and they are onlygray scale, making distinguishing between objects difficult.Several approaches to combining the visual and thermalmodalities have been proposed, however they are limited byassuming that both modalities are perfuming equally well.When one modality fails, existing approaches are unable todetect the drop in performance and disregard the under performingmodality. In this paper, a novel middle fusion approachfor combining visual and thermal spectrum imagesfor object tracking is proposed. Motion and object detectionis performed on each modality and the object detectionresults for each modality are fused base on the currentperformance of each modality. Modality performance is determinedby comparing the number of objects tracked by thesystem with the number detected by each mode, with a smallallowance made for objects entering and exiting the scene.The tracking performance of the proposed fusion schemeis compared with performance of the visual and thermalmodes individually, and a baseline middle fusion scheme.Improvement in tracking performance using the proposedfusion approach is demonstrated. The proposed approachis also shown to be able to detect the failure of an individualmodality and disregard its results, ensuring performance isnot degraded in such situations.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123891171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Local Feature Based Person Reidentification in Infrared Image Sequences
K. Jüngling, Michael Arens
doi:10.1109/AVSS.2010.75
In this paper, we address the task of appearance-based person reidentification in infrared image sequences. While common approaches for appearance-based person reidentification in the visible spectrum acquire color histograms of a person, this technique is not applicable in infrared for obvious reasons. To tackle the more difficult problem of person reidentification in infrared, we introduce an approach that relies on local image features only and thus is completely independent of sensor-specific features which might be available only in the visible spectrum. Our approach fits into an Implicit Shape Model (ISM) based person detection and tracking strategy described in previous work. Local features collected during tracking are employed for person reidentification, while the generalizing appearance codebook used for person detection serves as a structuring element to generate person signatures. By this, we gain an integrated approach that allows for fast online model generation, a compact representation, and fast model matching. Since the model allows for a joint representation of appearance and spatial information, no complex representation models such as graph structures are needed. We evaluate our person reidentification approach on a subset of the CASIA infrared dataset.
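A codebook-based signature of the kind described can be illustrated as a normalized histogram of appearance-word activations collected along a track, matched with the Bhattacharyya coefficient. Both choices are simplifications for illustration; the paper's ISM-based signatures also retain spatial information that this sketch drops.

```python
import numpy as np

def signature(feature_word_ids, codebook_size):
    """Build a person signature as a normalized histogram of appearance-
    codebook activations collected while the person was tracked."""
    h = np.bincount(feature_word_ids, minlength=codebook_size).astype(float)
    return h / max(h.sum(), 1.0)

def match_score(sig_a, sig_b):
    """Bhattacharyya coefficient between two signatures: 1.0 = identical
    activation distributions, 0.0 = disjoint."""
    return float(np.sum(np.sqrt(sig_a * sig_b)))
```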
{"title":"Local Feature Based Person Reidentification in Infrared Image Sequences","authors":"K. Jüngling, Michael Arens","doi":"10.1109/AVSS.2010.75","DOIUrl":"https://doi.org/10.1109/AVSS.2010.75","url":null,"abstract":"In this paper, we address the task of appearance basedperson reidentification in infrared image sequences. Whilecommon approaches for appearance based person reidentificationin the visible spectrum acquire color histograms ofa person, this technique is not applicable in infrared for obviousreasons. To tackle the more difficult problem of personreidentification in infrared, we introduce an approachthat relies on local image features only and thus is completelyindependent of sensor specific features which mightbe available only in the visible spectrum. Our approachfits into an Implicit Shape Model (ISM) based person detectionand tracking strategy described in previous work.Local features collected during tracking are employed forperson reidentification while the generalizing appearancecodebook used for person detection serves as structuringelement to generate person signatures. By this, we gain anintegrated approach that allows for fast online model generation,a compact representation, and fast model matching.Since the model allows for a joined representation ofappearance and spatial information, no complex representationmodels like graph structures are needed. We evaluateour person reidentification approach on a subset of the CASIAinfrared dataset.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128948026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Subjective Logic Based Hybrid Approach to Conditional Evidence Fusion for Forensic Visual Surveillance
Seunghan Han, Bonjung Koo, A. Hutter, V. Shet, W. Stechele
doi:10.1109/AVSS.2010.19
In forensic analysis of visual surveillance data, conditional knowledge representation and inference under uncertainty play an important role in deriving new contextual cues by fusing relevant evidential patterns. To address this aspect, both rule-based (a.k.a. extensional) and state-based (a.k.a. intensional) approaches have been adopted for situation or visual event analysis. The former provides flexible expressive power and computational efficiency but typically allows only unidirectional inference. The latter is computationally expensive but allows bidirectional interpretation of conditionals by treating the antecedent and consequent of a conditional as mutually relevant states. In visual surveillance, considering the varying semantics and potentially ambiguous causality in conditionals, it would be useful to combine the expressive power of rule-based systems with the ability of bidirectional interpretation. In this paper, we propose a hybrid approach that, while relying mainly on a rule-based architecture, also provides an intensional way of on-demand conditional modeling using conditional operators in subjective logic. We first show how conditionals can be assessed via explicit representation of ignorance in subjective logic. We then describe the proposed hybrid conditional handling framework. Finally, we present an experimental case study on a typical airport scene taken from visual surveillance data.
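For readers unfamiliar with subjective logic, the sketch below shows the binomial opinion representation with explicit ignorance (the uncertainty mass u) and the standard cumulative fusion operator from Jøsang's framework. The paper's conditional deduction and abduction operators are more involved and are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial subjective-logic opinion: belief b, disbelief d and
    uncertainty u with b + d + u = 1, plus base rate a. Ignorance is
    represented explicitly by u."""
    b: float
    d: float
    u: float
    a: float = 0.5

    def expectation(self):
        """Projected probability E = b + a*u."""
        return self.b + self.a * self.u

def cumulative_fuse(x, y):
    """Cumulative fusion of two opinions about the same proposition."""
    k = x.u + y.u - x.u * y.u
    if k == 0:  # two dogmatic opinions (u = 0): average them
        return Opinion((x.b + y.b) / 2, (x.d + y.d) / 2, 0.0, (x.a + y.a) / 2)
    return Opinion((x.b * y.u + y.b * x.u) / k,
                   (x.d * y.u + y.d * x.u) / k,
                   (x.u * y.u) / k, x.a)
```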
{"title":"Subjective Logic Based Hybrid Approach to Conditional Evidence Fusion for Forensic Visual Surveillance","authors":"Seunghan Han, Bonjung Koo, A. Hutter, V. Shet, W. Stechele","doi":"10.1109/AVSS.2010.19","DOIUrl":"https://doi.org/10.1109/AVSS.2010.19","url":null,"abstract":"In forensic analysis of visual surveillance data, condi-tional knowledge representation and inference under un-certainty play an important role for deriving new contex-tual cues by fusing relevant evidential patterns. To addressthis aspect, both rule-based (aka. extensional) and statebased (aka. intensional) approaches have been adoptedfor situation or visual event analysis. The former providesflexible expressive power and computational efficiency buttypically allows only one directional inference. The latteris computationally expensive but allows bidirectional inter-pretation of conditionals by treating antecedent and conse-quent of conditionals as mutually relevant states. In visualsurveillance, considering the varying semantics and poten-tially ambiguous causality in conditionals, it would be use-ful to combine the expressive power of rule-based systemwith the ability of bidirectional interpretation. In this paper,we propose a hybrid approach that, while relying mainly ona rule-based architecture, also provides an intensional wayof on-demand conditional modeling using conditional op-erators in subjective logic. We first show how conditionalscan be assessed via explicit representation of ignorance insubjective logic. We then describe the proposed hybrid con-ditional handling framework. Finally we present an exper-imental case study from a typical airport scene taken fromvisual surveillance data.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129013624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Task-Oriented Object Tracking in Large Distributed Camera Networks
Eduardo Monari, K. Kroschel
doi:10.1109/AVSS.2010.66
In this paper, a task-oriented approach for object tracking in large distributed camera networks is presented. This work includes three main contributions. First, a generic process framework designed for task-oriented video processing is presented. Second, the system components of the task-oriented framework needed for the task of multi-camera person tracking are introduced in detail. Third, for efficient task-oriented processing in large camera networks, the capability of dynamic sensor scheduling by the multi-camera tracking processes is indispensable; for this purpose, an efficient sensor selection approach is proposed.
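The selection criterion is not detailed in the abstract. A minimal geometric stand-in, assuming each camera is described by its position, heading, field of view and range, could rank candidate cameras for a predicted target position like this:

```python
import math

def select_cameras(target_xy, cameras, max_active=2):
    """Pick the cameras best placed to observe a predicted target position.
    `cameras` maps camera id -> (x, y, heading_rad, half_fov_rad, range_m);
    the distance-based ranking is an illustrative assumption."""
    candidates = []
    for cam_id, (cx, cy, heading, half_fov, rng) in cameras.items():
        dx, dy = target_xy[0] - cx, target_xy[1] - cy
        dist = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx)
        # signed angular offset between camera axis and target bearing
        off_axis = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
        if dist <= rng and off_axis <= half_fov:
            candidates.append((dist, cam_id))
    return [cam_id for _, cam_id in sorted(candidates)[:max_active]]
```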
{"title":"Task-Oriented Object Tracking in Large Distributed Camera Networks","authors":"Eduardo Monari, K. Kroschel","doi":"10.1109/AVSS.2010.66","DOIUrl":"https://doi.org/10.1109/AVSS.2010.66","url":null,"abstract":"In this paper a task-oriented approach for object trackingin large distributed camera networks is presented. Thiswork includes three main contributions. First a generic processframework is presented, which has been designed fortask-oriented video processing. Second, system componentsof the task-oriented framework needed for the task of multicameraperson tracking are introduced in detail. Third, foran efficient task-oriented processing in large camera networksthe capability of dynamic sensor scheduling by themulti-camera tracking processes is indispensable. For thispurpose an efficient sensor selection approach is proposed.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114621845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Human Action Recognition and Localization in Video Using Structured Learning of Local Space-Time Features
Tuan Hue Thi, Jian Zhang, Li Cheng, Li Wang, S. Satoh
doi:10.1109/AVSS.2010.76
This paper presents a unified framework for human action classification and localization in video using structured learning of local space-time features. Each human action class is represented by its own compact set of local patches. In our approach, we first use a discriminative hierarchical Bayesian classifier to select those space-time interest points that are constructive for each particular action. Those concise local features are then passed to a Support Vector Machine with Principal Component Analysis projection for the classification task. Meanwhile, action localization is done using Dynamic Conditional Random Fields developed to incorporate the spatial and temporal structure constraints of superpixels extracted around those features. Each superpixel in the video is defined by the shape and motion information of its corresponding feature region. Compelling results obtained from experiments on the KTH [22], Weizmann [1], HOHA [13] and TRECVid [23] datasets demonstrate the efficiency and robustness of our framework for the task of human action recognition and localization in video.
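The classification stage maps directly onto standard tooling. A minimal sketch with scikit-learn, assuming the interest-point selection has already produced one fixed-length descriptor per clip (the dimensions, class count and SVM parameters below are placeholders, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 500))   # 120 clips, 500-D descriptors
y_train = rng.integers(0, 6, size=120)  # six action classes (as in KTH)

# PCA projection followed by an SVM, matching the pipeline the
# abstract describes for the classification task.
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```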
{"title":"Human Action Recognition and Localization in Video Using Structured Learning of Local Space-Time Features","authors":"Tuan Hue Thi, Jian Zhang, Li Cheng, Li Wang, S. Satoh","doi":"10.1109/AVSS.2010.76","DOIUrl":"https://doi.org/10.1109/AVSS.2010.76","url":null,"abstract":"This paper presents a unified framework for human actionclassification and localization in video using structuredlearning of local space-time features. Each human actionclass is represented by a set of its own compact set of localpatches. In our approach, we first use a discriminativehierarchical Bayesian classifier to select those space-timeinterest points that are constructive for each particular action.Those concise local features are then passed to a SupportVector Machine with Principal Component Analysisprojection for the classification task. Meanwhile, the actionlocalization is done using Dynamic Conditional RandomFields developed to incorporate the spatial and temporalstructure constraints of superpixels extracted aroundthose features. Each superpixel in the video is defined by theshape and motion information of its corresponding featureregion. Compelling results obtained from experiments onKTH [22], Weizmann [1], HOHA [13] and TRECVid [23]datasets have proven the efficiency and robustness of ourframework for the task of human action recognition and localizationin video.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115239856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Histogram-Based Training Initialisation of Hidden Markov Models for Human Action Recognition
Z. Moghaddam, M. Piccardi
doi:10.1109/AVSS.2010.25
Human action recognition is often addressed by use of latent-state models such as the hidden Markov model and similar graphical models. As such models require Expectation-Maximisation training, arbitrary choices must be made for training initialisation, with a major impact on the final recognition accuracy. In this paper, we propose a histogram-based deterministic initialisation and compare it with both random and time-based deterministic initialisations. Experiments on a human action dataset show that the accuracy of the proposed method is higher than that of the other tested methods.
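A one-dimensional version of the idea is easy to sketch: instead of drawing the initial state means at random, seed EM with the centres of the most populated histogram bins of the observations. The paper's procedure over multivariate action features is more elaborate than this toy version.

```python
import numpy as np

def histogram_init(obs, n_states, n_bins=20):
    """Deterministic HMM initialisation sketch: place the initial state
    means at the centres of the `n_states` most populated histogram bins
    of the observation sequence."""
    counts, edges = np.histogram(obs, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    top = np.argsort(counts)[::-1][:n_states]
    return np.sort(centers[top])  # one initial mean per hidden state

# Toy data: three well-separated observation modes -> three state means.
obs = np.concatenate([np.random.normal(m, 0.3, 200) for m in (0.0, 2.0, 5.0)])
print(histogram_init(obs, n_states=3))
```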
{"title":"Histogram-Based Training Initialisation of Hidden Markov Models for Human Action Recognition","authors":"Z. Moghaddam, M. Piccardi","doi":"10.1109/AVSS.2010.25","DOIUrl":"https://doi.org/10.1109/AVSS.2010.25","url":null,"abstract":"Human action recognition is often addressed by use oflatent-state models such as the hidden Markov model andsimilar graphical models. As such models requireExpectation-Maximisation training, arbitrary choicesmust be made for training initialisation, with major impacton the final recognition accuracy. In this paper, wepropose a histogram-based deterministic initialisation andcompare it with both random and a time-baseddeterministic initialisations. Experiments on a humanaction dataset show that the accuracy of the proposedmethod proved higher than that of the other testedmethods.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"550 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125342061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Surveillance Camera Calibration from Observations of a Pedestrian
M. Evans, J. Ferryman
doi:10.1109/AVSS.2010.32
Calibrated cameras are an extremely useful resource in computer vision scenarios. Typically, cameras are calibrated through calibration targets, through measurements of the observed scene, or self-calibrated through features matched between cameras with overlapping fields of view. This paper considers an approach to camera calibration based on observations of a pedestrian and compares the resulting calibration to a commonly used approach requiring that measurements be made of the scene.
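One standard ingredient of pedestrian-based calibration, estimating the vertical vanishing point from the head-foot segments of a walking person, can be sketched as follows. Whether the paper uses exactly this formulation is not stated in the abstract; the full calibration also needs the horizon line, which this sketch omits.

```python
import numpy as np

def vertical_vanishing_point(heads, feet):
    """Least-squares intersection of the head-foot image lines of a
    walking pedestrian: an upright person is a vertical segment in the
    world, so these lines meet near the vertical vanishing point.
    heads, feet: (n, 2) arrays of corresponding image points."""
    heads = np.asarray(heads, float)
    feet = np.asarray(feet, float)
    # Line through each head-foot pair in homogeneous coords: l = h x f.
    lines = np.cross(np.c_[heads, np.ones(len(heads))],
                     np.c_[feet, np.ones(len(feet))])
    # Solve l . v = 0 for all lines via SVD; the smallest singular
    # vector is the common intersection point v.
    _, _, vt = np.linalg.svd(lines)
    v = vt[-1]
    return v[:2] / v[2]  # assumes the vanishing point is finite
```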
{"title":"Surveillance Camera Calibration from Observations of a Pedestrian","authors":"M. Evans, J. Ferryman","doi":"10.1109/AVSS.2010.32","DOIUrl":"https://doi.org/10.1109/AVSS.2010.32","url":null,"abstract":"Calibrated cameras are an extremely useful resource forcomputer vision scenarios. Typically, cameras are calibratedthrough calibration targets, measurements of the observedscene, or self-calibrated through features matchedbetween cameras with overlapping fields of view. This paperconsiders an approach to camera calibration based onobservations of a pedestrian and compares the resultingcalibration to a commonly used approach requiring thatmeasurements be made of the scene.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125201451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}