SSpose: Self-Supervised Spatial-Aware Model for Human Pose Estimation
Linfang Yu; Zhen Qin; Liqun Xu; Zhiguang Qin; Kim-Kwang Raymond Choo
IEEE Transactions on Artificial Intelligence, vol. 5, no. 11, pp. 5403-5417, 2024. DOI: 10.1109/TAI.2024.3440220
Abstract
Human pose estimation (HPE) relies on the anatomical relationships among different body parts to locate keypoints. Despite the significant progress achieved by convolutional neural network (CNN)-based models in HPE, they typically fail to explicitly learn the global dependencies among various body parts. To overcome this limitation, we propose a spatial-aware HPE model called SSpose that explicitly captures the spatial dependencies between specific keypoints and different locations in an image. The proposed SSpose model adopts a hybrid CNN-Transformer encoder to simultaneously capture local features and global dependencies. To better preserve image details, a multiscale fusion module is introduced to integrate coarse- and fine-grained image information. By establishing a connection with the activation maximization (AM) principle, the final attention layer of the Transformer aggregates contributions (i.e., attention scores) from all image positions to form the maximum-activation position in the heatmap, thereby performing keypoint localization directly in the head structure. Additionally, to address the issue of visible information leakage in convolutional reconstruction, we devise a self-supervised training framework for the SSpose model. This framework incorporates the masked autoencoder (MAE) technique into the SSpose model via masked convolution and a hierarchical masking strategy, thereby facilitating efficient self-supervised learning. Extensive experiments demonstrate that SSpose performs exceptionally well in the pose estimation task. On the COCO val set, it achieves an AP and AR of 77.3% and 82.1%, respectively, while on the COCO test-dev set, the AP and AR are 76.4% and 81.5%. Moreover, the model exhibits strong generalization capabilities on MPII.
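To make the architectural idea in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' SSpose implementation): a small CNN stem extracts local features, a Transformer encoder models global dependencies among image positions, and per-keypoint attention scores over all positions are read out as heatmaps whose maxima localize the keypoints. All layer sizes, module names, and the query-based readout are assumptions made for illustration; the multiscale fusion module and the MAE-style self-supervised pretraining described in the abstract are omitted here.

```python
# Illustrative sketch of a hybrid CNN-Transformer pose model with an
# attention-score heatmap readout. This is an assumed simplification,
# not the SSpose architecture from the paper.
import torch
import torch.nn as nn


class HybridPoseSketch(nn.Module):
    def __init__(self, num_keypoints=17, dim=128, depth=4, heads=4):
        super().__init__()
        # CNN stem: captures local features and downsamples the image by 8x.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1),
        )
        # Transformer encoder: models global dependencies among all positions.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # One learned query per keypoint; its scores over positions act as a heatmap.
        self.keypoint_queries = nn.Parameter(torch.randn(num_keypoints, dim))
        self.scale = dim ** -0.5

    def forward(self, images):
        feats = self.stem(images)                  # (B, C, H, W) local features
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) position tokens
        tokens = self.encoder(tokens)              # global context via self-attention
        # Attention-style readout: each keypoint query scores every image position;
        # the normalized score map is the heatmap, and its argmax gives the keypoint.
        scores = torch.einsum("kc,bnc->bkn", self.keypoint_queries, tokens) * self.scale
        heatmaps = scores.softmax(dim=-1).view(b, -1, h, w)
        return heatmaps


if __name__ == "__main__":
    model = HybridPoseSketch()
    out = model(torch.randn(2, 3, 256, 192))       # COCO-style input resolution
    print(out.shape)                               # torch.Size([2, 17, 32, 24])
```

The readout mirrors the abstract's connection to activation maximization: each keypoint's predicted location is simply the position whose aggregated attention contribution is largest, so no separate regression head is required in this sketch.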