Kalman filter based video background estimation
J. Scott, M. Pusateri, Duane C. Cornish
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009), October 2009
DOI: 10.1109/AIPR.2009.5466306
Transferring responsibility for object tracking in a video scene from human operators to computer vision has the appeal that the computer remains vigilant under all circumstances, while an operator's attention can wane. However, at peak performance, human operators often outperform computer vision because of their ability to adapt to changes in the scene. While many tracking algorithms are available, background subtraction, in which a background image is subtracted from the current frame to isolate the foreground objects, remains a well-proven and popular technique. Under some circumstances, a background image can be captured manually when no foreground objects are present. For persistent outdoor surveillance, however, the background evolves over time with diurnal, weather, and seasonal changes, rendering a fixed background image inadequate. We present a method for estimating the background of a scene using a Kalman filter approach. Our method applies a one-dimensional Kalman filter to each pixel of the camera array to track its intensity. The algorithm is designed to track the background intensity of a scene under the assumptions that the camera view is relatively stationary and that the background evolves much more slowly than relevant foreground events. This allows the background subtraction algorithm to adapt automatically to changes in the scene. The algorithm is a two-step process of mean-intensity update and standard-deviation update, both derived from the standard Kalman filter equations. By modeling the input standard deviation, the algorithm also allows objects to transition between background and foreground as appropriate. For example, a car entering the field of view of a parking-lot surveillance camera is initially part of the foreground; once parked, it eventually transitions to the background.
We present results validating our algorithm's ability to estimate backgrounds in a variety of scenes. We demonstrate the application of our method to track objects using simple frame detection with no temporal coherency.
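The per-pixel mean/variance update described in the abstract can be sketched as a scalar Kalman filter applied element-wise over the image. This is a minimal illustrative sketch, not the paper's implementation: the function name, the process-noise value, the measurement-noise values, and the detection threshold are all assumptions chosen for demonstration.

```python
import numpy as np

def update_background(mean, var, frame, process_var=2.0, k_sigma=2.5):
    """One step of a per-pixel 1-D Kalman background update (sketch).

    mean, var   : current per-pixel background estimate (mean and variance)
    frame       : new grayscale frame, same shape as mean/var
    process_var : assumed per-step process noise (hypothetical value)
    k_sigma     : foreground threshold in standard deviations (hypothetical)
    """
    # Predict: the background drifts slowly, so inflate the variance
    # by the process noise each frame.
    var_pred = var + process_var

    # Classify: pixels far from the predicted background mean are foreground.
    resid = frame - mean
    foreground = np.abs(resid) > k_sigma * np.sqrt(var_pred)

    # Model the input (measurement) standard deviation: large for foreground
    # pixels so they update the background only slowly, small for background
    # pixels so the estimate tracks gradual scene changes.
    meas_var = np.where(foreground, 400.0, 16.0)  # hypothetical values

    # Kalman gain and update, applied element-wise to every pixel.
    gain = var_pred / (var_pred + meas_var)
    mean_new = mean + gain * resid
    var_new = (1.0 - gain) * var_pred
    return mean_new, var_new, foreground
```

Because foreground pixels are assigned a large measurement variance, their Kalman gain is small and they are absorbed into the background only gradually, which reproduces the parked-car behavior described above: a persistent object eventually stops triggering the foreground mask.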