{"title":"PoseSDF++: Point Cloud-Based 3-D Human Pose Estimation via Implicit Neural Representation","authors":"Jianxin Yang;Yuxuan Liu;Jinkai Li;Xiao Gu;Guang-Zhong Yang;Yao Guo","doi":"10.1109/TII.2024.3514159","DOIUrl":null,"url":null,"abstract":"Predicting accurate human pose from 3-D visual observation presents a formidable challenge in computer vision, with numerous applications across various industries. However, most existing studies tackled this issue by regressing the 3-D pose from depth maps via 2-D convolutional neural networks or parametric human models, with limited development in point cloud-based methods. To this end, we propose PoseSDF++, i.e., a point cloud-based encoder–decoder network utilizing implicit neural representation to perform 3-D human pose estimation (HPE) and nonparametric shape reconstruction simultaneously. Leveraging the representative capacity of the signed distance function (SDF), we conceptualize the 3-D HPE as a multiple-shape reconstruction task and propose a distance-aware regression method to accurately estimate the 3-D joint positions. In specific, our PoseSDF++ consists of three modules: first, <italic>a hierarchical encoder</i> with vector neuron layers extracts the multiscale rotation equivariant features from the point clouds captured from an arbitrary viewpoint, addressing the degradation issue caused by viewpoint variation of implicit representation; second, <italic>a shape decoder</i> maps the extracted feature and the query to its corresponding shape SDF; third, <italic>a pose decoder</i> computes the distance between the query and the target keypoints, namely, the pose SDF. 
Extensive experiments on four publicly available datasets demonstrate that our PoseSDF++ achieves competitive performance against the state-of-the-art point cloud-based methods and covering the human hand (HANDS 2019), lower limbs (ICL-Gait), and full body (DFAUST, LiDARHuman2.6M) pose estimation.","PeriodicalId":13301,"journal":{"name":"IEEE Transactions on Industrial Informatics","volume":"21 3","pages":"2689-2698"},"PeriodicalIF":9.9000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Industrial Informatics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10811749/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Predicting accurate human pose from 3-D visual observation presents a formidable challenge in computer vision, with numerous applications across various industries. However, most existing studies have tackled this issue by regressing the 3-D pose from depth maps via 2-D convolutional neural networks or parametric human models, with limited development in point cloud-based methods. To this end, we propose PoseSDF++, a point cloud-based encoder–decoder network that utilizes implicit neural representation to perform 3-D human pose estimation (HPE) and nonparametric shape reconstruction simultaneously. Leveraging the representative capacity of the signed distance function (SDF), we conceptualize 3-D HPE as a multiple-shape reconstruction task and propose a distance-aware regression method to accurately estimate the 3-D joint positions. Specifically, PoseSDF++ consists of three modules: first, a hierarchical encoder with vector neuron layers extracts multiscale rotation-equivariant features from point clouds captured from an arbitrary viewpoint, addressing the degradation of implicit representations caused by viewpoint variation; second, a shape decoder maps the extracted feature and a query point to the corresponding shape SDF; third, a pose decoder computes the distance between the query point and the target keypoints, namely, the pose SDF. Extensive experiments on four publicly available datasets, covering hand (HANDS 2019), lower-limb (ICL-Gait), and full-body (DFAUST, LiDARHuman2.6M) pose estimation, demonstrate that PoseSDF++ achieves competitive performance against state-of-the-art point cloud-based methods.
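To make the pose-SDF idea concrete, the sketch below illustrates (in NumPy) what such a representation encodes and how joint positions can be read back out of it. The abstract does not specify the paper's decoding procedure, so both the per-joint distance field and the linearized least-squares (trilateration) readout here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pose_sdf(queries, joints):
    """Per-joint unsigned distance field: (Q, 3) queries vs (K, 3) joints -> (Q, K).
    Entry (q, k) is the Euclidean distance from query point q to joint k,
    which is what a pose decoder would be trained to predict."""
    return np.linalg.norm(queries[:, None, :] - joints[None, :, :], axis=-1)

def trilaterate(queries, dists):
    """Recover one 3-D point from its distances to >= 4 non-coplanar query points.
    Subtracting the first equation ||x - q_0||^2 = d_0^2 from the others
    linearizes the system, which is then solved by least squares."""
    A = 2.0 * (queries[1:] - queries[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(queries[1:] ** 2, axis=1) - np.sum(queries[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def estimate_joints(queries, pred_dists):
    """Turn a (predicted) pose SDF of shape (Q, K) back into K joint positions."""
    return np.stack([trilaterate(queries, pred_dists[:, k])
                     for k in range(pred_dists.shape[1])])
```

With exact distances the readout recovers the joints; with a learned decoder, the least-squares step aggregates many noisy per-query distance predictions into one distance-aware joint estimate, which is the intuition behind framing HPE as a distance regression problem.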
Journal Introduction:
The IEEE Transactions on Industrial Informatics is a multidisciplinary journal dedicated to publishing technical papers that connect theory with practical applications of informatics in industrial settings. It focuses on the utilization of information in intelligent, distributed, and agile industrial automation and control systems. The scope includes topics such as knowledge-based and AI-enhanced automation, intelligent computer control systems, flexible and collaborative manufacturing, industrial informatics in software-defined vehicles and robotics, computer vision, industrial cyber-physical and industrial IoT systems, real-time and networked embedded systems, security in industrial processes, industrial communications, systems interoperability, and human-machine interaction.