{"title":"Find you wherever you are: geographic location and environment context-based pedestrian detection","authors":"Yuan Liu, Zhongchao Shi, G. Wang, Haike Guan","doi":"10.1145/2390790.2390801","DOIUrl":null,"url":null,"abstract":"Most existing approaches to pedestrian detection only use the visual appearances as the main source in real world images. However, the visual information cannot always provide reliable guidance since pedestrians often change pose or wear different clothes under different conditions. In this work, by leveraging a vast amount of Web images, we first construct a contextual image database, in which each image is automatically attached with geographic location (i.e., latitude and longitude) and environment information (i.e., season, time and weather condition), assisted by image metadata and a few pre-trained classifiers. For the further pedestrian detection, an annotation scheme is presented which can sharply decrease manual labeling efforts. Several properties of the contextual image database are studied including whether the database is authentic and helpful for pedestrian detection. Moreover, we propose a context-based pedestrian detection approach by jointly exploring visual and contextual cues in a probabilistic model. Encouraging results are reported on our contextual image database.","PeriodicalId":441886,"journal":{"name":"GeoMM '12","volume":"113 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"GeoMM '12","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2390790.2390801","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Most existing approaches to pedestrian detection rely only on visual appearance as the main source of information in real-world images. However, visual information alone cannot always provide reliable guidance, since pedestrians change pose and wear different clothes under different conditions. In this work, by leveraging a vast amount of Web images, we first construct a contextual image database in which each image is automatically tagged with geographic location (i.e., latitude and longitude) and environment information (i.e., season, time, and weather condition), assisted by image metadata and a few pre-trained classifiers. To support further pedestrian detection, we present an annotation scheme that sharply reduces manual labeling effort. We study several properties of the contextual image database, including whether the database is authentic and helpful for pedestrian detection. Moreover, we propose a context-based pedestrian detection approach that jointly exploits visual and contextual cues in a probabilistic model. Encouraging results are reported on our contextual image database.
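The abstract does not specify how the probabilistic model fuses visual and contextual cues, so the sketch below is only a minimal illustration under stated assumptions: it combines a visual detector score with a context prior derived from season/time/weather tags in a naive-Bayes-style way. The `ImageContext` record, the prior table, and all numeric values are hypothetical and do not reproduce the authors' actual model.

```python
from dataclasses import dataclass

# Hypothetical context record attached to each image, mirroring the cues
# described in the abstract: geographic location plus season/time/weather.
@dataclass
class ImageContext:
    latitude: float
    longitude: float
    season: str       # e.g. "winter"
    time_of_day: str  # e.g. "night"
    weather: str      # e.g. "rain"

# Illustrative context prior P(pedestrian | season, time, weather).
# The values are made up for demonstration only.
CONTEXT_PRIOR = {
    ("winter", "night", "rain"): 0.15,
    ("summer", "day", "clear"): 0.60,
}
DEFAULT_PRIOR = 0.30


def context_prior(ctx: ImageContext) -> float:
    """Look up the (hypothetical) context prior for an image's tags."""
    return CONTEXT_PRIOR.get((ctx.season, ctx.time_of_day, ctx.weather),
                             DEFAULT_PRIOR)


def fuse_scores(visual_score: float, ctx: ImageContext) -> float:
    """Combine a visual detector score with the context prior.

    Treats the visual score as P(pedestrian | appearance), multiplies it
    with the context prior, and renormalizes against the complementary
    event -- a naive-Bayes-style fusion, not necessarily the paper's model.
    """
    prior = context_prior(ctx)
    joint_pos = visual_score * prior
    joint_neg = (1.0 - visual_score) * (1.0 - prior)
    return joint_pos / (joint_pos + joint_neg)


if __name__ == "__main__":
    ctx = ImageContext(48.85, 2.35, "winter", "night", "rain")
    # A borderline visual score is pulled down by an unfavorable context.
    print(fuse_scores(visual_score=0.55, ctx=ctx))
```

In this toy fusion, an unfavorable context (e.g., a rainy winter night) lowers the posterior for a borderline visual detection, while a favorable context raises it, which is the general effect the abstract attributes to jointly exploiting visual and contextual cues.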