{"title":"基于区域表征的性别认同融合","authors":"S. D. Hu, Brendan Jou, Aaron Jaech, M. Savvides","doi":"10.1109/IJCB.2011.6117602","DOIUrl":null,"url":null,"abstract":"Much of the current work on gender identification relies on legacy datasets of heavily controlled images with minimal facial appearance variations. As studies explore the effects of adding elements of variation into the data, they have met challenges in achieving granular statistical significance due to the limited size of their datasets. In this study, we aim to create a classification framework that is robust to non-studio, uncontrolled, real-world images. We show that the fusion of separate linear classifiers trained on smart-selected local patches achieves 90% accuracy, which is a 5% improvement over a baseline linear classifier on a straightforward pixel representation. These results are reported on our own uncontrolled database of ∼26; 700 images collected from the Web.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Fusion of region-based representations for gender identification\",\"authors\":\"S. D. Hu, Brendan Jou, Aaron Jaech, M. Savvides\",\"doi\":\"10.1109/IJCB.2011.6117602\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Much of the current work on gender identification relies on legacy datasets of heavily controlled images with minimal facial appearance variations. As studies explore the effects of adding elements of variation into the data, they have met challenges in achieving granular statistical significance due to the limited size of their datasets. In this study, we aim to create a classification framework that is robust to non-studio, uncontrolled, real-world images. 
We show that the fusion of separate linear classifiers trained on smart-selected local patches achieves 90% accuracy, which is a 5% improvement over a baseline linear classifier on a straightforward pixel representation. These results are reported on our own uncontrolled database of ∼26; 700 images collected from the Web.\",\"PeriodicalId\":103913,\"journal\":{\"name\":\"2011 International Joint Conference on Biometrics (IJCB)\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 International Joint Conference on Biometrics (IJCB)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCB.2011.6117602\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 International Joint Conference on Biometrics (IJCB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCB.2011.6117602","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
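The paper's exact patch-selection and fusion rules are not specified in this abstract, but the core idea (independent linear classifiers on local facial regions, combined at the score level) can be sketched as follows. This is a minimal illustration on synthetic data, assuming logistic regression as the linear classifier and a simple average of posterior probabilities as the fusion rule; the patch coordinates and the planted class signal are invented for the demo, not the paper's "smart-selected" regions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical local regions on a 24x24 grayscale face crop
# (illustrative stand-ins for the paper's smart-selected patches).
PATCHES = {
    "eyes":  (slice(4, 10),  slice(2, 22)),
    "nose":  (slice(8, 16),  slice(8, 16)),
    "mouth": (slice(16, 22), slice(6, 18)),
}

def make_data(n):
    """Synthetic stand-in for labeled face images: class 1 gets a
    small mean shift in the 'eyes' and 'mouth' regions."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 24, 24))
    X[y == 1, 4:10, 2:22] += 0.5
    X[y == 1, 16:22, 6:18] += 0.3
    return X, y

def train_patch_classifiers(X, y):
    """Train one linear classifier per local patch."""
    clfs = {}
    for name, (rs, cs) in PATCHES.items():
        feats = X[:, rs, cs].reshape(len(X), -1)  # flatten patch pixels
        clfs[name] = LogisticRegression(max_iter=1000).fit(feats, y)
    return clfs

def fuse_predict(clfs, X):
    """Score-level fusion: average per-patch posterior probabilities,
    then threshold at 0.5."""
    scores = []
    for name, (rs, cs) in PATCHES.items():
        feats = X[:, rs, cs].reshape(len(X), -1)
        scores.append(clfs[name].predict_proba(feats)[:, 1])
    return (np.mean(scores, axis=0) > 0.5).astype(int)

Xtr, ytr = make_data(400)
Xte, yte = make_data(200)
clfs = train_patch_classifiers(Xtr, ytr)
acc = (fuse_predict(clfs, Xte) == yte).mean()
print(f"fused accuracy: {acc:.2f}")
```

A straightforward pixel-representation baseline, for comparison, would simply train one `LogisticRegression` on the full flattened 24x24 image; the fused patch classifiers mirror the structure of the approach the abstract describes, though the reported 90%-vs-85% gap of course depends on the real dataset and patch selection.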