{"title":"双自适应融合注意改进行人属性识别","authors":"Wenbiao Xie, Chen Zou, Chengui Fu, Xiaomei Xie, Qiuming Liu, He Xiao","doi":"10.1145/3581807.3581814","DOIUrl":null,"url":null,"abstract":"As one of the important fields of computer vision research, pedestrian attribute recognition has received increasing attention on researchers at domestic and foreign. However, obtaining long-distance pedestrian information on actual scenes has problems, such as lack of information, incomplete feature extraction, and low attribute recognition accuracy. To address these issues, we proposed a Dual Adaptive Fusion Attention and Criss-Cross Attention Module (DAFCC). This module contains two sub-modules: First, the dual adaptive fusion attention module automatically adjusts the weights of attributes in different scales, then fusion the different scale features and makes attribute extraction more complete. Second, we employ criss-cross attention to extract rich contextual information, which is beneficial for visual understanding. By training on the public PA-100K, RAP and PETA datasets, the mean accuracies achieved 81.09%, 81.44% and 85.94%, respectively. Extensive experimental results show that the method has strong competitiveness among many current classical algorithms.","PeriodicalId":292813,"journal":{"name":"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving Pedestrian Attribute Recognition with Dual Adaptive Fusion Attention\",\"authors\":\"Wenbiao Xie, Chen Zou, Chengui Fu, Xiaomei Xie, Qiuming Liu, He Xiao\",\"doi\":\"10.1145/3581807.3581814\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As one of the important fields of computer vision research, pedestrian attribute recognition has received increasing attention on researchers at domestic and foreign. However, obtaining long-distance pedestrian information on actual scenes has problems, such as lack of information, incomplete feature extraction, and low attribute recognition accuracy. To address these issues, we proposed a Dual Adaptive Fusion Attention and Criss-Cross Attention Module (DAFCC). This module contains two sub-modules: First, the dual adaptive fusion attention module automatically adjusts the weights of attributes in different scales, then fusion the different scale features and makes attribute extraction more complete. Second, we employ criss-cross attention to extract rich contextual information, which is beneficial for visual understanding. By training on the public PA-100K, RAP and PETA datasets, the mean accuracies achieved 81.09%, 81.44% and 85.94%, respectively. 
Extensive experimental results show that the method has strong competitiveness among many current classical algorithms.\",\"PeriodicalId\":292813,\"journal\":{\"name\":\"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3581807.3581814\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3581807.3581814","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improving Pedestrian Attribute Recognition with Dual Adaptive Fusion Attention
As one of the important fields of computer vision research, pedestrian attribute recognition has received increasing attention from researchers both at home and abroad. However, obtaining information about distant pedestrians in real-world scenes suffers from problems such as missing information, incomplete feature extraction, and low attribute recognition accuracy. To address these issues, we propose a Dual Adaptive Fusion Attention and Criss-Cross Attention module (DAFCC). This module contains two sub-modules: first, the dual adaptive fusion attention module automatically adjusts the weights of attributes at different scales and then fuses the multi-scale features, making attribute extraction more complete; second, we employ criss-cross attention to extract rich contextual information, which benefits visual understanding. Trained on the public PA-100K, RAP, and PETA datasets, the method achieves mean accuracies of 81.09%, 81.44%, and 85.94%, respectively. Extensive experimental results show that the method is competitive with many current classical algorithms.
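The abstract names two mechanisms but gives no implementation details, so the following PyTorch sketch is only a plausible reading of them, not the authors' code: a dual adaptive fusion block that learns input-dependent weights for two feature scales before fusing them, and a simplified criss-cross attention (in the spirit of CCNet) that lets each position attend over its row and column. The class names, the gating design, the channel sizes, and the omission of CCNet's centre-masking and recurrence are all assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the paper's released code. Assumes PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAdaptiveFusion(nn.Module):
    """Hypothetical fusion block: weight two feature scales with learned, input-dependent weights."""

    def __init__(self, channels):
        super().__init__()
        # Gating network: global context of both scales -> softmax weight per scale.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2, kernel_size=1),
        )

    def forward(self, feat_fine, feat_coarse):
        # Bring the coarse-scale map to the fine resolution before fusing.
        feat_coarse = F.interpolate(feat_coarse, size=feat_fine.shape[-2:],
                                    mode="bilinear", align_corners=False)
        ctx = F.adaptive_avg_pool2d(torch.cat([feat_fine, feat_coarse], dim=1), 1)
        w = torch.softmax(self.gate(ctx), dim=1)          # (B, 2, 1, 1): one weight per scale
        return w[:, :1] * feat_fine + w[:, 1:] * feat_coarse


class CrissCrossAttention(nn.Module):
    """Simplified criss-cross attention: each position attends over its row and column.
    (The original CCNet additionally masks the duplicated centre position and is applied
    recurrently; both details are omitted here for brevity.)"""

    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.k = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))         # learnable residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        e_row = torch.einsum("bchw,bchv->bhwv", q, k)     # affinities within each row
        e_col = torch.einsum("bchw,bcgw->bhwg", q, k)     # affinities within each column
        attn = F.softmax(torch.cat([e_row, e_col], dim=-1), dim=-1)
        a_row, a_col = attn[..., :w], attn[..., w:]
        out = (torch.einsum("bhwv,bchv->bchw", a_row, v)      # aggregate along rows
               + torch.einsum("bhwg,bcgw->bchw", a_col, v))   # aggregate along columns
        return self.gamma * out + x


if __name__ == "__main__":
    # Toy usage with made-up shapes: fuse two backbone stages, then refine with criss-cross attention.
    fine, coarse = torch.randn(2, 256, 32, 16), torch.randn(2, 256, 16, 8)
    fused = DualAdaptiveFusion(256)(fine, coarse)
    refined = CrissCrossAttention(256)(fused)
    print(refined.shape)                                  # torch.Size([2, 256, 32, 16])
```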