{"title":"基于帧内骨架约束建模和分组策略的多尺度图卷积网络三维人体运动预测","authors":"Zhihan Zhuang, Yuan Li, Songlin Du, T. Ikenaga","doi":"10.23919/MVA57639.2023.10216076","DOIUrl":null,"url":null,"abstract":"Attention-based feed-forward networks and graph convolution networks have recently shown great promise in 3D skeleton-based human motion prediction for their good performance in learning temporal and spatial relations. However, previous methods have two critical issues: first, spatial dependencies for distal joints in each independent frame are hard to learn; second, the basic architecture of graph convolution network ignores hierarchical structure and diverse motion patterns of different body parts. To address these issues, this paper proposes an intra-frame skeleton constraints modeling method and a Grouping based Multi-Scale Graph Convolution Network (GMS-GCN) model. The intra-frame skeleton constraints modeling method leverages self-attention mechanism and a designed adjacency matrix to model the skeleton constraints of distal joints in each independent frame. The GMS-GCN utilizes a grouping strategy to learn the dynamics of various body parts separately. Instead of mapping features in the same feature space, GMS-GCN extracts human body features in different dimensions by up-sample and down-sample GCN layers. Experiment results demonstrate that our method achieves an average MPJPE of 34.7mm for short-term prediction and 93.2mm for long-term prediction and both outperform the state-of-the-art approaches.","PeriodicalId":338734,"journal":{"name":"2023 18th International Conference on Machine Vision and Applications (MVA)","volume":"353 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intra-frame Skeleton Constraints Modeling and Grouping Strategy Based Multi-Scale Graph Convolution Network for 3D Human Motion Prediction\",\"authors\":\"Zhihan Zhuang, Yuan Li, Songlin Du, T. Ikenaga\",\"doi\":\"10.23919/MVA57639.2023.10216076\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Attention-based feed-forward networks and graph convolution networks have recently shown great promise in 3D skeleton-based human motion prediction for their good performance in learning temporal and spatial relations. However, previous methods have two critical issues: first, spatial dependencies for distal joints in each independent frame are hard to learn; second, the basic architecture of graph convolution network ignores hierarchical structure and diverse motion patterns of different body parts. To address these issues, this paper proposes an intra-frame skeleton constraints modeling method and a Grouping based Multi-Scale Graph Convolution Network (GMS-GCN) model. The intra-frame skeleton constraints modeling method leverages self-attention mechanism and a designed adjacency matrix to model the skeleton constraints of distal joints in each independent frame. The GMS-GCN utilizes a grouping strategy to learn the dynamics of various body parts separately. Instead of mapping features in the same feature space, GMS-GCN extracts human body features in different dimensions by up-sample and down-sample GCN layers. 
Experiment results demonstrate that our method achieves an average MPJPE of 34.7mm for short-term prediction and 93.2mm for long-term prediction and both outperform the state-of-the-art approaches.\",\"PeriodicalId\":338734,\"journal\":{\"name\":\"2023 18th International Conference on Machine Vision and Applications (MVA)\",\"volume\":\"353 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 18th International Conference on Machine Vision and Applications (MVA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/MVA57639.2023.10216076\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 18th International Conference on Machine Vision and Applications (MVA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MVA57639.2023.10216076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Intra-frame Skeleton Constraints Modeling and Grouping Strategy Based Multi-Scale Graph Convolution Network for 3D Human Motion Prediction
Attention-based feed-forward networks and graph convolution networks have recently shown great promise in 3D skeleton-based human motion prediction, owing to their strong performance in learning temporal and spatial relations. However, previous methods suffer from two critical issues: first, spatial dependencies of distal joints within each individual frame are hard to learn; second, the basic graph convolution network architecture ignores the hierarchical structure and diverse motion patterns of different body parts. To address these issues, this paper proposes an intra-frame skeleton constraints modeling method and a Grouping-based Multi-Scale Graph Convolution Network (GMS-GCN). The intra-frame skeleton constraints modeling method leverages a self-attention mechanism and a designed adjacency matrix to model the skeleton constraints of distal joints in each individual frame. GMS-GCN utilizes a grouping strategy to learn the dynamics of different body parts separately. Instead of mapping features within a single feature space, GMS-GCN extracts human-body features at different dimensions through up-sampling and down-sampling GCN layers. Experimental results demonstrate that our method achieves an average MPJPE of 34.7 mm for short-term prediction and 93.2 mm for long-term prediction, both outperforming state-of-the-art approaches.
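For readers who want a concrete picture of the two components described in the abstract, the following minimal PyTorch sketch illustrates one plausible reading of them: self-attention over the joints of a single frame biased by a designed skeleton adjacency matrix, and a GCN-style down-sampling layer that pools joint-level features into coarser body-part features. This is not the authors' released implementation; all module and parameter names, tensor shapes, and the choice of an additive adjacency bias are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraFrameSkeletonAttention(nn.Module):
    # Self-attention over the joints of one frame, biased by a designed
    # skeleton adjacency matrix so distal joints stay constrained to the
    # rest of the body (hypothetical formulation of the paper's idea).
    def __init__(self, dim, adjacency):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        # Hand-designed (num_joints, num_joints) adjacency, kept learnable.
        self.adj_bias = nn.Parameter(adjacency.clone())
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (batch, num_joints, dim) -- the joints of a single frame.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = q @ k.transpose(-2, -1) * self.scale + self.adj_bias
        return F.softmax(attn, dim=-1) @ v

class GraphDownSample(nn.Module):
    # GCN-style layer that pools joint-level features into coarser
    # body-part features; the transpose of `assign` would up-sample.
    def __init__(self, dim, assign):
        super().__init__()
        # assign: (num_parts, num_joints) joint-to-part membership (0/1),
        # row-normalized so each part averages its member joints.
        self.register_buffer("pool", assign / assign.sum(-1, keepdim=True))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # (batch, num_joints, dim) -> (batch, num_parts, dim)
        return torch.relu(self.proj(self.pool @ x))

# Toy usage: 4 joints in a chain, grouped into 2 hypothetical "parts".
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
assign = torch.tensor([[1., 1, 0, 0], [0, 0, 1, 1]])
frame = torch.randn(8, 4, 16)
out = GraphDownSample(16, assign)(IntraFrameSkeletonAttention(16, adj)(frame))
print(out.shape)  # torch.Size([8, 2, 16])

In the paper's terms, stacking such down-sampling layers with their transposed up-sampling counterparts is what would let a grouping-based multi-scale GCN exchange features between joint-level and body-part-level representations.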