{"title":"PA-AWCNN: RGB-D动作识别的双流并行注意自适应权重网络","authors":"Lu Yao, Sheng Liu, Chaonan Li, Siyu Zou, Shengyong Chen, Diyi Guan","doi":"10.1109/icra46639.2022.9811995","DOIUrl":null,"url":null,"abstract":"Due to overly relying on appearance information or adopting direct static feature fusion, most of the existing action recognition methods based on multi-modality have poor robustness and insufficient consideration of modality differences. To address these problems, we propose a two-stream adaptive weight integration network with a three-dimensional parallel attention module, PA-AWCNN. Firstly, a three-dimensional Parallel Attention (PA) module is proposed to effectively extract features of spatial, temporal and channel dimensions and reduce the cross-dimensional interference, to achieve better robustness. Secondly, a Common Feature-driven (CFD) feature integration module is proposed to dynamically integrate appearance and depth features with adaptive weights, utilizing modality differences to redeem the lack of each feature, thereby balance the influence of both. The proposed PA-AW CNN uses the representative integrated feature generated by attention enhancement and feature integration for action recognition; it can not only get higher recognition accuracy but also improve the performance of distinguishing similar actions. Experiments illustrate that the proposed method achieves com-parable performances to state-of-the-art methods and obtains the accuracy of 92.76% and 95.65% on NTU RGB+D Dataset and SBU Kinect Interaction Dataset, respectively. The code is publicly available at: https://github.com/Luu-Yao/PA-AWCNN.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"PA-AWCNN: Two-stream Parallel Attention Adaptive Weight Network for RGB-D Action Recognition\",\"authors\":\"Lu Yao, Sheng Liu, Chaonan Li, Siyu Zou, Shengyong Chen, Diyi Guan\",\"doi\":\"10.1109/icra46639.2022.9811995\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to overly relying on appearance information or adopting direct static feature fusion, most of the existing action recognition methods based on multi-modality have poor robustness and insufficient consideration of modality differences. To address these problems, we propose a two-stream adaptive weight integration network with a three-dimensional parallel attention module, PA-AWCNN. Firstly, a three-dimensional Parallel Attention (PA) module is proposed to effectively extract features of spatial, temporal and channel dimensions and reduce the cross-dimensional interference, to achieve better robustness. Secondly, a Common Feature-driven (CFD) feature integration module is proposed to dynamically integrate appearance and depth features with adaptive weights, utilizing modality differences to redeem the lack of each feature, thereby balance the influence of both. The proposed PA-AW CNN uses the representative integrated feature generated by attention enhancement and feature integration for action recognition; it can not only get higher recognition accuracy but also improve the performance of distinguishing similar actions. 
Experiments illustrate that the proposed method achieves com-parable performances to state-of-the-art methods and obtains the accuracy of 92.76% and 95.65% on NTU RGB+D Dataset and SBU Kinect Interaction Dataset, respectively. The code is publicly available at: https://github.com/Luu-Yao/PA-AWCNN.\",\"PeriodicalId\":341244,\"journal\":{\"name\":\"2022 International Conference on Robotics and Automation (ICRA)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icra46639.2022.9811995\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icra46639.2022.9811995","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Because they rely heavily on appearance information or adopt direct static feature fusion, most existing multi-modality action recognition methods have poor robustness and take insufficient account of modality differences. To address these problems, we propose a two-stream adaptive weight integration network with a three-dimensional parallel attention module, PA-AWCNN. First, a three-dimensional Parallel Attention (PA) module is proposed to effectively extract features along the spatial, temporal, and channel dimensions while reducing cross-dimensional interference, yielding better robustness. Second, a Common Feature-Driven (CFD) feature integration module is proposed to dynamically integrate appearance and depth features with adaptive weights, exploiting modality differences to compensate for the deficiencies of each feature and thereby balance their influence. The proposed PA-AWCNN uses the representative integrated feature produced by attention enhancement and feature integration for action recognition; it not only achieves higher recognition accuracy but also better distinguishes similar actions. Experiments show that the proposed method achieves performance comparable to state-of-the-art methods, with accuracies of 92.76% and 95.65% on the NTU RGB+D and SBU Kinect Interaction datasets, respectively. The code is publicly available at: https://github.com/Luu-Yao/PA-AWCNN.
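
To make the idea of parallel attention over spatial, temporal, and channel dimensions concrete, below is a minimal PyTorch sketch of a three-branch attention block for a 5-D video feature map. The branch designs (squeeze-and-excitation channel gating, 1-D temporal convolution, 2-D spatial convolution) and the averaging merge are assumptions for illustration, not the authors' released PA module; see the linked repository for the actual implementation.

# Hypothetical sketch of a three-branch parallel attention module for a
# video feature map of shape (batch, channels, time, height, width).
# NOT the authors' implementation; branch designs are assumed.
import torch
import torch.nn as nn


class ParallelAttention(nn.Module):
    """Apply channel, temporal, and spatial attention in parallel branches,
    then merge the three re-weighted features by averaging."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel branch: squeeze-and-excitation style gating.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Temporal branch: 1-D convolution over the frame axis.
        self.temporal_conv = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Spatial branch: 2-D convolution over channel-pooled maps.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, h, w = x.shape

        # Channel attention: (b, c) -> (b, c, 1, 1, 1)
        ch = self.channel_fc(x.mean(dim=(2, 3, 4))).view(b, c, 1, 1, 1)

        # Temporal attention: pool H, W -> (b, c, t), conv -> (b, 1, t, 1, 1)
        tp = self.temporal_conv(x.mean(dim=(3, 4))).view(b, 1, t, 1, 1)

        # Spatial attention: pool time, then avg/max over channels -> (b, 2, h, w)
        sp_in = x.mean(dim=2)
        sp = torch.cat([sp_in.mean(dim=1, keepdim=True),
                        sp_in.amax(dim=1, keepdim=True)], dim=1)
        sp = self.spatial_conv(sp).view(b, 1, 1, h, w)

        # Each branch re-weights the input independently; averaging the results
        # keeps any single dimension from dominating (less cross-dimensional interference).
        return (x * ch + x * tp + x * sp) / 3.0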
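
The adaptive-weight integration of appearance and depth features can likewise be illustrated with the brief sketch below. The gating design (a small network that predicts one softmax weight per modality from the concatenated stream features) is a hypothetical stand-in for the paper's CFD module, shown only to convey the idea of dynamic, per-sample weighting of the two streams.

# Hypothetical sketch of adaptive-weight fusion of RGB and depth stream
# features; the gating design is an assumption, not the paper's exact CFD module.
import torch
import torch.nn as nn


class AdaptiveWeightFusion(nn.Module):
    """Predict per-sample weights for the appearance and depth features from
    their concatenated (shared) information and fuse them by a weighted sum."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 2),  # one weight per modality
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, depth_feat: (batch, feat_dim) pooled stream features.
        weights = torch.softmax(self.gate(torch.cat([rgb_feat, depth_feat], dim=1)), dim=1)
        w_rgb, w_depth = weights[:, :1], weights[:, 1:]
        return w_rgb * rgb_feat + w_depth * depth_feat


if __name__ == "__main__":
    fuse = AdaptiveWeightFusion(feat_dim=512)
    fused = fuse(torch.randn(4, 512), torch.randn(4, 512))
    print(fused.shape)  # torch.Size([4, 512])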