{"title":"Finding People by their Shadows: Aerial Surveillance Using Body Biometrics Extracted from Ground Video","authors":"Y. Iwashita, A. Stoica, R. Kurazume","doi":"10.1109/EST.2012.41","DOIUrl":null,"url":null,"abstract":"Shadow analysis has been shown to enable the extension of gait biometrics to aerial surveillance. In past work the classifiers were both trained and tested on shadow features extracted by image processing. In real scenarios this requires imagery with shadows of people to be recognized. On the other hand one rarely has available the shadow information of the person sought, however direct body movement/information may be more easily obtained from ground surveillance cameras or video recordings. This paper proposes a scenario in which gait/dynamics features from body movement are obtained from a ground video and the search for matching dynamics of shadows takes place in aerial surveillance video. A common scenario would be the recording of people by ground/city surveillance cameras and the use of information to initiate a wide-area search for shadows from aerial platforms. Vice-versa, the shadow of a suspect leaving an incident area, detected by aerial surveillance, can trigger a city-wide search on body/gait biometrics as observed with city/ground surveillance cameras. To illustrate the feasibility of this approach the paper introduces a method that compares contours of bodies in ground image frames and contours of shadows in aerial image frames, for which an alignment is made and a distance is calculated, integrated over a normalized gait cycle. While the results are preliminary, for only 5 people, and using a specific walking arrangement to avoid compensation for changes in the viewing angles, the method obtains a 70% correct classification rate which is a first step in proving the feasibility of the approach.","PeriodicalId":314247,"journal":{"name":"2012 Third International Conference on Emerging Security Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Third International Conference on Emerging Security Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EST.2012.41","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Shadow analysis has been shown to extend gait biometrics to aerial surveillance. In past work, classifiers were both trained and tested on shadow features extracted by image processing, which in real scenarios requires imagery containing the shadows of the people to be recognized. However, shadow information for the person sought is rarely available, whereas direct body movement information can be obtained more easily from ground surveillance cameras or video recordings. This paper proposes a scenario in which gait/dynamics features of body movement are obtained from a ground video, and the search for matching shadow dynamics takes place in aerial surveillance video. A common scenario would be the recording of people by ground/city surveillance cameras and the use of that information to initiate a wide-area search for shadows from aerial platforms. Conversely, the shadow of a suspect leaving an incident area, detected by aerial surveillance, can trigger a city-wide search on body/gait biometrics as observed with city/ground surveillance cameras. To illustrate the feasibility of this approach, the paper introduces a method that compares contours of bodies in ground image frames with contours of shadows in aerial image frames: the contours are aligned, a distance between them is calculated, and this distance is integrated over a normalized gait cycle. While the results are preliminary, covering only 5 people and using a specific walking arrangement that avoids compensating for changes in viewing angle, the method obtains a 70% correct classification rate, a first step toward proving the feasibility of the approach.
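As a rough illustration of the matching step described in the abstract, the sketch below extracts a body contour from a ground frame and a shadow contour from an aerial frame, aligns them, computes a distance, and averages that distance over a normalized gait cycle. This is not the authors' implementation: the binary silhouette masks, the OpenCV contour extraction, the translation-and-scale alignment, and the Chamfer-style contour distance are all assumptions made for illustration.

```python
# Hypothetical sketch of body/shadow contour matching (not the authors' code).
# Assumes binary silhouette masks per frame and OpenCV 4.x.
import numpy as np
import cv2


def largest_contour(mask, n_points=100):
    """Largest contour of a binary mask, resampled by arc length to n_points."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    seg = np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1)
    t = np.concatenate(([0.0], np.cumsum(seg)))[:-1] / seg.sum()
    u = np.linspace(0.0, 1.0, n_points, endpoint=False)
    return np.stack([np.interp(u, t, pts[:, 0]),
                     np.interp(u, t, pts[:, 1])], axis=1)


def align(src, dst):
    """Translation + scale alignment of src onto dst (rotation omitted for brevity)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    scale = np.linalg.norm(dst_c) / np.linalg.norm(src_c)
    return src_c * scale + dst.mean(axis=0)


def chamfer(a, b):
    """Symmetric mean nearest-neighbour distance between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())


def contour_distance(body_mask, shadow_mask):
    """Distance between aligned body and shadow contours for one frame pair."""
    body = largest_contour(body_mask)
    shadow = largest_contour(shadow_mask)
    return chamfer(align(body, shadow), shadow)


def gait_cycle_distance(body_masks, shadow_masks):
    """Average per-frame distance over frame pairs from a normalized gait cycle."""
    return float(np.mean([contour_distance(b, s)
                          for b, s in zip(body_masks, shadow_masks)]))
```

In a search setting, this cycle-level distance would be computed between the ground-video gait cycle of the person sought and each candidate shadow track in the aerial video, with the lowest-distance candidate taken as the match; the specific distance and alignment used here merely stand in for whatever the paper actually employs.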