{"title":"Who goes there?: approaches to mapping facial appearance diversity","authors":"Zachary Bessinger, C. Stauffer, Nathan Jacobs","doi":"10.1145/2996913.2996997","DOIUrl":null,"url":null,"abstract":"Geotagged imagery, from satellite, aerial, and ground-level cameras, provides a rich record of how the appearance of scenes and objects differ across the globe. Modern web- based mapping software makes it easy to see how different places around the world look, both from satellite and ground-level views. Unfortunately, interfaces for exploring how the appearance of objects depend on geographic location are quite limited. In this work, we focus on a particularly common object, the human face, and propose learning generative models that relate facial appearance and geographic location. We train these models using a novel dataset of geotagged face imagery we constructed for this task. We present qualitative and quantitative results that demonstrate that these models capture meaningful trends in appearance. We also describe a framework for constructing a web-based visualization that captures the geospatial distribution of human facial appearance.","PeriodicalId":20525,"journal":{"name":"Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2996913.2996997","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Geotagged imagery, from satellite, aerial, and ground-level cameras, provides a rich record of how the appearance of scenes and objects differs across the globe. Modern web-based mapping software makes it easy to see how different places around the world look, from both satellite and ground-level views. Unfortunately, interfaces for exploring how the appearance of objects depends on geographic location are quite limited. In this work, we focus on a particularly common object, the human face, and propose learning generative models that relate facial appearance to geographic location. We train these models on a novel dataset of geotagged face imagery constructed for this task. We present qualitative and quantitative results demonstrating that these models capture meaningful trends in appearance. We also describe a framework for constructing a web-based visualization that captures the geospatial distribution of human facial appearance.
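
The abstract does not specify the model family, so the sketch below is only one plausible instance of a location-conditioned generative model of facial appearance: a Gaussian mixture fit jointly over (latitude, longitude, appearance-feature) vectors, conditioned on a query location to obtain the local distribution of face features. Everything here is an illustrative assumption, not the authors' method: the synthetic data, the feature dimensionality, and the helper names `face_features`, `latlon`, and `conditional_mean` are placeholders.

```python
# Hypothetical sketch of a location-conditioned generative model of
# face appearance; not the authors' implementation or dataset.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder data: N geotagged faces, each a D-dim appearance feature
# (e.g., an embedding from a pretrained face network) plus (lat, lon).
N, D = 1000, 16
face_features = rng.normal(size=(N, D))
latlon = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(N, 2))

# Model the joint density over (location, appearance) with a Gaussian
# mixture; conditioning on location then yields a local appearance model.
joint = np.hstack([latlon, face_features])
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(joint)

def conditional_mean(query_latlon, gmm, d_loc=2):
    """Mean appearance feature at a query location, via per-component
    Gaussian conditioning weighted by each component's responsibility."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.zeros(len(weights))
    cond = np.zeros((len(weights), means.shape[1] - d_loc))
    for k in range(len(weights)):
        mu_l, mu_a = means[k, :d_loc], means[k, d_loc:]
        S_ll = covs[k][:d_loc, :d_loc]   # location block
        S_al = covs[k][d_loc:, :d_loc]   # appearance-location block
        diff = query_latlon - mu_l
        # Component responsibility at the query location (unnormalized
        # Gaussian density over the location marginal).
        resp[k] = weights[k] * np.exp(
            -0.5 * diff @ np.linalg.solve(S_ll, diff)
        ) / np.sqrt(np.linalg.det(2.0 * np.pi * S_ll))
        # Conditional mean of appearance given location, component k.
        cond[k] = mu_a + S_al @ np.linalg.solve(S_ll, diff)
    resp /= resp.sum()
    return resp @ cond

# Query the model at an arbitrary location (here, near New York City).
print(conditional_mean(np.array([40.0, -74.0]), gmm)[:4])
```

Sweeping `conditional_mean` over a latitude/longitude grid and decoding the resulting feature vectors back to images is one way such a model could drive the web-based map visualization the abstract describes.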