Security and privacy protection are critical issues for public acceptance of camera networks. Smart cameras, with onboard image processing, can be used to identify and remove privacy-sensitive image regions. Existing approaches, however, only address isolated aspects without considering the integration with established security technologies and the underlying platform. This work tries to fill this gap and presents TrustCAM, a security-enhanced smart camera. Based on Trusted Computing, we realize integrity protection, authenticity and confidentiality of image data. Multiple levels of privacy protection, together with access control, are supported. The impact on overall system performance is evaluated on a real prototype implementation.
{"title":"TrustCAM: Security and Privacy-Protection for an Embedded Smart Camera Based on Trusted Computing","authors":"Thomas Winkler, B. Rinner","doi":"10.1109/AVSS.2010.38","DOIUrl":"https://doi.org/10.1109/AVSS.2010.38","url":null,"abstract":"Security and privacy protection are critical issues for public acceptance of camera networks. Smart cameras, with onboard image processing, can be used to identify and remove privacy-sensitive image regions. Existing approaches, however, only address isolated aspects without considering the integration with established security technologies and the underlying platform. This work tries to fill this gap and presents TrustCAM, a security-enhanced smart camera. Based on Trusted Computing, we realize integrity protection, authenticity and confidentiality of image data. Multiple levels of privacy protection, together with access control, are supported. The impact on overall system performance is evaluated on a real prototype implementation.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134078619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Donatello Conte, P. Foggia, G. Percannella, Francesco Tufano, M. Vento
This paper presents a novel method to count people for video surveillance applications. Methods in the literature either follow a direct approach, by first detecting people and then counting them, or an indirect approach, by establishing a relation between some easily detectable scene features and the estimated number of people. The indirect approach is considerably more robust, but it is not easy to take into account such factors as perspective or people groups with different densities. The proposed technique, while based on the indirect approach, specifically addresses these problems; furthermore, it is based on a trainable estimator that does not require an explicit formulation of a priori knowledge about the perspective and density effects present in the scene at hand. In the experimental evaluation, the method has been extensively compared with the algorithm by Albiol et al., which provided the highest performance at the PETS 2009 contest on people counting. The experimentation has used the public PETS 2009 datasets. The results confirm that the proposed method improves the accuracy, while retaining the robustness of the indirect approach.
{"title":"A Method for Counting People in Crowded Scenes","authors":"Donatello Conte, P. Foggia, G. Percannella, Francesco Tufano, M. Vento","doi":"10.1109/AVSS.2010.78","DOIUrl":"https://doi.org/10.1109/AVSS.2010.78","url":null,"abstract":"This paper presents a novel method to count people for video surveillance applications. Methods in the literature either follow a direct approach, by first detecting people and then counting them, or an indirect approach, by establishing a relation between some easily detectable scene features and the estimated number of people. The indirect approach is considerably more robust, but it is not easy to take into account such factors as perspective or people groups with different densities. The proposed technique, while based on the indirect approach, specifically addresses these problems; furthermore, it is based on a trainable estimator that does not require an explicit formulation of a priori knowledge about the perspective and density effects present in the scene at hand. In the experimental evaluation, the method has been extensively compared with the algorithm by Albiol et al., which provided the highest performance at the PETS 2009 contest on people counting. The experimentation has used the public PETS 2009 datasets. The results confirm that the proposed method improves the accuracy, while retaining the robustness of the indirect approach.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132431058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
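The indirect approach described in the abstract above can be pictured as a trainable estimator: rather than detecting individual people, fit a mapping from an easily measured scene feature to the people count. The sketch below uses a 1-D least-squares fit with invented feature values (e.g. a perspective-weighted foreground pixel area) and counts; the paper's actual estimator and features are more sophisticated.

```python
# Minimal sketch of indirect people counting: learn count ~ a * feature + b
# from training frames, then estimate the count for an unseen frame.
# All feature values and counts below are made-up illustrative data.

def fit_linear(features, counts):
    """Closed-form 1-D least-squares fit of count ~ a * feature + b."""
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(counts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, counts))
    var = sum((x - mean_x) ** 2 for x in features)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def estimate_count(feature, a, b):
    """Round the regression output to a non-negative people count."""
    return max(0, round(a * feature + b))

# Hypothetical training data: weighted foreground area vs. ground-truth count.
train_features = [1200, 2500, 3600, 5100, 6300]
train_counts = [2, 5, 7, 10, 12]

a, b = fit_linear(train_features, train_counts)
print(estimate_count(4400, a, b))  # prints 8
```

A real system would replace the single scalar feature with several perspective-corrected features, which is precisely the kind of a priori modeling the paper's trainable estimator avoids formulating explicitly.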
P. Miller, Weiru Liu, C. Fowler, Huiyu Zhou, Jiali Shen, Jianbing Ma, Jianguo Zhang, Weiqi Yan, K. Mclaughlin, S. Sezer
The Intelligent Sensor Information System (ISIS) is described. ISIS is an active CCTV approach to reducing crime and anti-social behavior on public transport systems such as buses. Key to the system is the idea of event composition, in which directly detected atomic events are combined to infer higher-level events with semantic meaning. Video analytics are described that profile the gender of passengers and track them as they move about a 3-D space. The overall system architecture is described, which integrates the on-board event recognition with the control room software over a wireless network to generate a real-time alert. Data from a preliminary data-gathering trial are presented.
{"title":"Intelligent Sensor Information System For Public Transport – To Safely Go…","authors":"P. Miller, Weiru Liu, C. Fowler, Huiyu Zhou, Jiali Shen, Jianbing Ma, Jianguo Zhang, Weiqi Yan, K. Mclaughlin, S. Sezer","doi":"10.1109/AVSS.2010.36","DOIUrl":"https://doi.org/10.1109/AVSS.2010.36","url":null,"abstract":"The Intelligent Sensor Information System (ISIS) is described. ISIS is an active CCTV approach to reducing crime and anti-social behavior on public transport systems such as buses. Key to the system is the idea of event composition, in which directly detected atomic events are combined to infer higher-level events with semantic meaning. Video analytics are described that profile the gender of passengers and track them as they move about a 3-D space. The overall system architecture is described, which integrates the on-board event recognition with the control room software over a wireless network to generate a real-time alert. Data from a preliminary data-gathering trial are presented.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133718722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
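The event-composition idea in the ISIS abstract — combining directly detected atomic events into higher-level events with semantic meaning — can be illustrated with a toy temporal rule matcher. The event names, timestamps, and the composition rule below are all invented for illustration; the paper does not specify its composition machinery at this level of detail.

```python
# Toy event composition: a higher-level event is inferred when a configured
# sequence of atomic events occurs in order within a time window.
from collections import namedtuple

Event = namedtuple("Event", "name time")

def compose(events, pattern, window):
    """Return start times where `pattern` occurs in order within `window` seconds."""
    hits = []
    for start in (e for e in events if e.name == pattern[0]):
        t, ok = start.time, True
        for name in pattern[1:]:
            # Find the next matching atomic event inside the window.
            nxt = next((e for e in events
                        if e.name == name and t < e.time <= start.time + window),
                       None)
            if nxt is None:
                ok = False
                break
            t = nxt.time
        if ok:
            hits.append(start.time)
    return hits

# Hypothetical atomic events from on-board video analytics.
atomic = [Event("person_enters", 0.0), Event("person_sits", 2.0),
          Event("person_stands", 40.0), Event("person_exits", 41.5)]

# Higher-level semantic event: a "short visit" = enter, then exit within 60 s.
print(compose(atomic, ("person_enters", "person_exits"), 60.0))  # prints [0.0]
```

In a deployed system each detected higher-level event would feed the control-room alerting path described in the abstract.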
On-line abnormality detection in video without the use of object detection and tracking is a desirable task in surveillance. We address this problem for the case when labeled information about normal events is limited and information about abnormal events is not available. We formulate this problem as a one-class classification, where multiple local novelty classifiers (detectors) are used to first learn normal actions based on motion information and then to detect abnormal instances. Each detector is associated with a small region of interest and is trained over labeled samples projected on an appropriate subspace. We discover this subspace by using both labeled and unlabeled segments. We investigate the use of subspace learning and compare two methodologies based on linear (Principal Components Analysis) and on non-linear subspace learning (Locality Preserving Projections), respectively. Experimental results on a real underground station dataset show that the linear approach is better suited for cases where the subspace learning is restricted to the labeled samples, whereas the non-linear approach is preferable in the presence of additional unlabeled data.
{"title":"Local Abnormality Detection in Video Using Subspace Learning","authors":"Ioannis Tziakos, A. Cavallaro, Li-Qun Xu","doi":"10.1109/AVSS.2010.70","DOIUrl":"https://doi.org/10.1109/AVSS.2010.70","url":null,"abstract":"On-line abnormality detection in video without the use of object detection and tracking is a desirable task in surveillance. We address this problem for the case when labeled information about normal events is limited and information about abnormal events is not available. We formulate this problem as a one-class classification, where multiple local novelty classifiers (detectors) are used to first learn normal actions based on motion information and then to detect abnormal instances. Each detector is associated with a small region of interest and is trained over labeled samples projected on an appropriate subspace. We discover this subspace by using both labeled and unlabeled segments. We investigate the use of subspace learning and compare two methodologies based on linear (Principal Components Analysis) and on non-linear subspace learning (Locality Preserving Projections), respectively. Experimental results on a real underground station dataset show that the linear approach is better suited for cases where the subspace learning is restricted to the labeled samples, whereas the non-linear approach is preferable in the presence of additional unlabeled data.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114139564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
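As a rough illustration of the linear (PCA) variant of the subspace approach above: samples of normal motion define a low-dimensional subspace, and a test sample is flagged as abnormal when its distance to that subspace exceeds a threshold derived from the training residuals. The 2-D toy features, the invented data, and the 3x-max-residual threshold are all illustrative assumptions, not the paper's formulation.

```python
# One-class novelty detection via a 1-D principal subspace of 2-D features.
# Normal samples lie near a line; large distance to that line => abnormal.

def principal_direction(samples, iters=200):
    """Mean and first principal component of 2-D samples (power iteration)."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    centred = [(x - mx, y - my) for x, y in samples]
    # 2x2 covariance matrix entries.
    cxx = sum(x * x for x, _ in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    vx, vy = 1.0, 0.0
    for _ in range(iters):  # power iteration converges to the dominant eigenvector
        nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return (mx, my), (vx, vy)

def residual(sample, mean, direction):
    """Distance from sample to the line through mean along direction."""
    dx, dy = sample[0] - mean[0], sample[1] - mean[1]
    t = dx * direction[0] + dy * direction[1]  # projection coefficient
    rx, ry = dx - t * direction[0], dy - t * direction[1]
    return (rx * rx + ry * ry) ** 0.5

# Hypothetical "normal" motion features, lying near the line y = 2x.
normal = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0), (5.0, 9.9)]
mean, direction = principal_direction(normal)
threshold = 3 * max(residual(s, mean, direction) for s in normal)

print(residual((3.0, 6.1), mean, direction) > threshold)  # near subspace: False
print(residual((3.0, 1.0), mean, direction) > threshold)  # far away: True
```

The non-linear (Locality Preserving Projections) variant replaces the principal subspace with a graph-based embedding, which this sketch does not attempt.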
Real-time object tracking, feature assessment and classification based on video are an enabling technology for improving situation awareness of human operators as well as for automated recognition of critical situations. To bridge the gap between video signal-processing output and spatio-temporal analysis of object behavior at the semantic level, a generic and sensor-independent object representation is necessary. However, in the case of public and corporate video surveillance, centralized storage of aggregated data leads to privacy violations. This article explains how a centralized object representation, complying with the Fair Information Practice Principles (FIP) privacy constraints, can be implemented for a video surveillance system.
{"title":"Privacy-Aware Object Representation for Surveillance Systems","authors":"Hauke Vagts, A. Bauer","doi":"10.1109/AVSS.2010.73","DOIUrl":"https://doi.org/10.1109/AVSS.2010.73","url":null,"abstract":"Real-time object tracking, feature assessment and classification based on video are an enabling technology for improving situation awareness of human operators as well as for automated recognition of critical situations. To bridge the gap between video signal-processing output and spatio-temporal analysis of object behavior at the semantic level, a generic and sensor-independent object representation is necessary. However, in the case of public and corporate video surveillance, centralized storage of aggregated data leads to privacy violations. This article explains how a centralized object representation, complying with the Fair Information Practice Principles (FIP) privacy constraints, can be implemented for a video surveillance system.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117274999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Yahyanejad, D. Wischounig-Strucl, M. Quaritsch, B. Rinner
Unmanned aerial vehicles (UAVs) have recently been deployed in various civilian applications such as environmental monitoring, aerial imaging or surveillance. Small-scale UAVs are of special interest for first responders since they can rather easily provide bird’s eye view images of disaster areas. In this paper we present a hybrid approach to mosaick an overview image of the area of interest given a set of individual images captured by UAVs flying at low altitude. Our approach combines metadata-based and image-based stitching methods in order to overcome the challenges of low-altitude, small-scale UAV deployment such as non-nadir view, inaccurate sensor data, non-planar ground surfaces and limited computing and communication resources. For the generation of the overview image we preserve georeferencing as much as possible, since this is an important requirement for disaster management applications. Our mosaicking method has been implemented on our UAV system and evaluated based on a quality metric.
{"title":"Incremental Mosaicking of Images from Autonomous, Small-Scale UAVs","authors":"S. Yahyanejad, D. Wischounig-Strucl, M. Quaritsch, B. Rinner","doi":"10.1109/AVSS.2010.14","DOIUrl":"https://doi.org/10.1109/AVSS.2010.14","url":null,"abstract":"Unmanned aerial vehicles (UAVs) have recently been deployed in various civilian applications such as environmental monitoring, aerial imaging or surveillance. Small-scale UAVs are of special interest for first responders since they can rather easily provide bird’s eye view images of disaster areas. In this paper we present a hybrid approach to mosaick an overview image of the area of interest given a set of individual images captured by UAVs flying at low altitude. Our approach combines metadata-based and image-based stitching methods in order to overcome the challenges of low-altitude, small-scale UAV deployment such as non-nadir view, inaccurate sensor data, non-planar ground surfaces and limited computing and communication resources. For the generation of the overview image we preserve georeferencing as much as possible, since this is an important requirement for disaster management applications. Our mosaicking method has been implemented on our UAV system and evaluated based on a quality metric.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115256596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable tracking of people in video and recovering their identities are of great importance to video analytics applications. For outdoor applications, long range identity sensors such as active RFID can provide good coverage in a large open space, though they only provide coarse location information. We propose a probabilistic approach using noisy inputs from multiple long range identity sensors to globally associate and identify fragmented tracklets generated by video tracking algorithms. We extend a network flow based data association model to recover tracklet identity efficiently. Our approach is evaluated using five minutes of video and active RFID measurements capturing four people wearing RFID tags and a couple of passersby. Simulation is then used to evaluate performance for a larger number of targets under different scenarios.
{"title":"Global Identification of Tracklets in Video Using Long Range Identity Sensors","authors":"Xunyi Yu, A. Ganz","doi":"10.1109/AVSS.2010.46","DOIUrl":"https://doi.org/10.1109/AVSS.2010.46","url":null,"abstract":"Reliable tracking of people in video and recovering their identities are of great importance to video analytics applications. For outdoor applications, long range identity sensors such as active RFID can provide good coverage in a large open space, though they only provide coarse location information. We propose a probabilistic approach using noisy inputs from multiple long range identity sensors to globally associate and identify fragmented tracklets generated by video tracking algorithms. We extend a network flow based data association model to recover tracklet identity efficiently. Our approach is evaluated using five minutes of video and active RFID measurements capturing four people wearing RFID tags and a couple of passersby. Simulation is then used to evaluate performance for a larger number of targets under different scenarios.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115656333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
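The global association idea in the abstract above can be pictured as choosing a joint assignment of identities to tracklets that maximizes total likelihood, rather than matching each tracklet independently. The likelihood table below is invented, and brute-force enumeration over permutations stands in for the paper's far more efficient network-flow formulation.

```python
# Globally associate tracklets with RFID-reported identities by maximising
# the joint assignment likelihood (brute force, for illustration only).
from itertools import permutations

def associate(likelihood):
    """likelihood[t][i] ~ P(tracklet t belongs to identity i).
    Returns the identity index assigned to each tracklet under the best
    joint (one-to-one) assignment."""
    n = len(likelihood)
    best, best_score = None, -1.0
    for perm in permutations(range(n)):
        score = 1.0
        for t, i in enumerate(perm):
            score *= likelihood[t][i]
        if score > best_score:
            best, best_score = list(perm), score
    return best

# Hypothetical likelihoods. Rows: tracklets; columns: identities (tags A, B, C).
likelihood = [
    [0.6, 0.3, 0.1],  # tracklet 0: probably tag A
    [0.5, 0.4, 0.1],  # tracklet 1: tag A also plausible, but A is contested
    [0.2, 0.2, 0.6],  # tracklet 2: probably tag C
]
print(associate(likelihood))  # prints [0, 1, 2]
```

A min-cost network-flow solver replaces this factorial enumeration with a polynomial-time computation, which is what makes the paper's model practical for many tracklets.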
Human motion change detection is a challenging task for a surveillance sensor system. Major challenges include complex scenes with a large number of targets and confusors, and complex motion behaviors of different human objects. Human motion change detection and understanding have been intensively studied over the past decades. In this paper, we present a Hierarchical Gaussian Process Dynamical Model (HGPDM) integrated with a particle filter tracker for human motion change detection. Firstly, the high dimensional human motion trajectory training data is projected to the low dimensional latent space with a two-layer hierarchy. The latent space at each leaf node in the bottom layer represents a typical human motion trajectory, while the root node in the upper layer controls the interaction and switching among leaf nodes. The trained HGPDM will then be used to classify test object trajectories which are captured by the particle filter tracker. If the motion trajectory is different from the motion in the previous frame, the root node will transfer the motion trajectory to the corresponding leaf node. In addition, HGPDM can be used to predict the next motion state, and provide Gaussian process dynamical samples for the particle filter framework. The experimental results indicate that our framework can accurately track and detect human motion changes despite complex motion and occlusion. In addition, the sampling in the hierarchical latent space has greatly improved the efficiency of the particle filter framework.
{"title":"Human Motion Change Detection by Hierarchical Gaussian Process Dynamical Model with Particle Filter","authors":"Yafeng Yin, H. Man, Jing Wang, Guang Yang","doi":"10.1109/AVSS.2010.55","DOIUrl":"https://doi.org/10.1109/AVSS.2010.55","url":null,"abstract":"Human motion change detection is a challenging task for a surveillance sensor system. Major challenges include complex scenes with a large number of targets and confusors, and complex motion behaviors of different human objects. Human motion change detection and understanding have been intensively studied over the past decades. In this paper, we present a Hierarchical Gaussian Process Dynamical Model (HGPDM) integrated with a particle filter tracker for human motion change detection. Firstly, the high dimensional human motion trajectory training data is projected to the low dimensional latent space with a two-layer hierarchy. The latent space at each leaf node in the bottom layer represents a typical human motion trajectory, while the root node in the upper layer controls the interaction and switching among leaf nodes. The trained HGPDM will then be used to classify test object trajectories which are captured by the particle filter tracker. If the motion trajectory is different from the motion in the previous frame, the root node will transfer the motion trajectory to the corresponding leaf node. In addition, HGPDM can be used to predict the next motion state, and provide Gaussian process dynamical samples for the particle filter framework. The experimental results indicate that our framework can accurately track and detect human motion changes despite complex motion and occlusion. In addition, the sampling in the hierarchical latent space has greatly improved the efficiency of the particle filter framework.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122042661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the elderly population is growing rapidly, improving the quality of life of elderly people at home is of great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose an activity monitoring system which aims to achieve behavior analysis of elderly people. The proposed system consists of an approach combining heterogeneous sensor data to recognize activities at home. This approach combines data provided by video cameras with data provided by environmental sensors attached to house furnishings. In this paper, we validate the proposed activity monitoring system for the recognition of a set of daily activities (e.g. using kitchen equipment, preparing a meal) for 9 real elderly volunteers living in an experimental apartment. We compare the behavioral profiles of the 9 elderly volunteers. This study shows that the proposed system is well accepted by the elderly and well appreciated by the medical staff.
{"title":"An Activity Monitoring System for Real Elderly at Home: Validation Study","authors":"N. Zouba, F. Brémond, M. Thonnat","doi":"10.1109/AVSS.2010.83","DOIUrl":"https://doi.org/10.1109/AVSS.2010.83","url":null,"abstract":"Since the elderly population is growing rapidly, improving the quality of life of elderly people at home is of great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose an activity monitoring system which aims to achieve behavior analysis of elderly people. The proposed system consists of an approach combining heterogeneous sensor data to recognize activities at home. This approach combines data provided by video cameras with data provided by environmental sensors attached to house furnishings. In this paper, we validate the proposed activity monitoring system for the recognition of a set of daily activities (e.g. using kitchen equipment, preparing a meal) for 9 real elderly volunteers living in an experimental apartment. We compare the behavioral profiles of the 9 elderly volunteers. This study shows that the proposed system is well accepted by the elderly and well appreciated by the medical staff.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123486918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minjae Kim, Bonhwa Ku, Daesung Chung, Hyunhak Shin, Bonghyup Kang, D. Han, Hanseok Ko
In image reconstruction, dynamic super resolution image reconstruction algorithms have been investigated to enhance video frames sequentially, where explicit motion estimation is considered as a major factor in the performance. This paper proposes a novel measurement validation method to attain robust image reconstruction results under inaccurate motion estimation. In addition, we present an effective scene change detection method dedicated to the proposed super resolution technique for minimizing erroneous results when abrupt scene changes occur in the video frames. Representative experimental results show excellent performance of the proposed algorithm in terms of the reconstruction quality and processing speed.
{"title":"Robust Dynamic Super Resolution under Inaccurate Motion Estimation","authors":"Minjae Kim, Bonhwa Ku, Daesung Chung, Hyunhak Shin, Bonghyup Kang, D. Han, Hanseok Ko","doi":"10.1109/AVSS.2010.49","DOIUrl":"https://doi.org/10.1109/AVSS.2010.49","url":null,"abstract":"In image reconstruction, dynamic super resolution image reconstruction algorithms have been investigated to enhance video frames sequentially, where explicit motion estimation is considered as a major factor in the performance. This paper proposes a novel measurement validation method to attain robust image reconstruction results under inaccurate motion estimation. In addition, we present an effective scene change detection method dedicated to the proposed super resolution technique for minimizing erroneous results when abrupt scene changes occur in the video frames. Representative experimental results show excellent performance of the proposed algorithm in terms of the reconstruction quality and processing speed.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129892363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
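One way to picture measurement validation in sequential super resolution, as described in the abstract above: accept a new low-resolution measurement only when it agrees with the motion-compensated prediction, so that inaccurate motion estimates (or an abrupt scene change) do not corrupt the running estimate. The 1-D signals, threshold, and blending rule below are invented and much simpler than the paper's method.

```python
# Toy measurement validation for sequential reconstruction: gate each new
# measurement against the motion-compensated prediction before blending.

def mad(a, b):
    """Mean absolute difference between two equally sized signals."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def update(estimate, prediction, measurement, threshold=0.5, gain=0.5):
    """Blend the measurement into the estimate only if it validates.
    Returns (new_estimate, accepted)."""
    if mad(prediction, measurement) > threshold:
        return estimate, False  # reject: likely bad motion estimate / scene change
    blended = [(1 - gain) * e + gain * m for e, m in zip(estimate, measurement)]
    return blended, True

estimate = [1.0, 2.0, 3.0, 4.0]
prediction = [1.0, 2.0, 3.0, 4.0]  # estimate warped by the estimated motion
good = [1.1, 2.1, 2.9, 4.0]        # measurement consistent with the prediction
bad = [4.0, 1.0, 0.0, 9.0]         # inconsistent, e.g. after a scene change

print(update(estimate, prediction, good)[1])  # prints True  (accepted)
print(update(estimate, prediction, bad)[1])   # prints False (rejected)
```

The paper's dedicated scene change detector plays a similar gating role at the whole-frame level, resetting the reconstruction instead of merely skipping one measurement.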