{"title":"Human Motion Prediction based on IMUs and MetaFormer","authors":"Tian Xu, Chunyu Zhi, Qiongjie Cui","doi":"10.1145/3582177.3582179","DOIUrl":null,"url":null,"abstract":"Human motion prediction forecasts future human poses from the histories, which is necessary for all tasks that need human-robot interactions. Currently, almost existing approaches make predictions based on visual observations, while vision-based motion capture (Mocap) systems have a significant limitation, e.g. occlusions. The vision-based Mocap systems will inevitably suffer from the occlusions. The first reason is the deep ambiguity of mapping the single-view observations to the 3D human pose; and then considering the complex environments in the wild, other objects will lead to the missing observations of the subject. Considering these factors, some researchers utilize non-visual systems as alternatives. We propose to utilize inertial measurement units (IMUs) to capture human poses and make predictions. To bump up the accuracy, we propose a novel model based on MetaFormer with spatial MLP and Temporal pooling (SMTPFormer) to learn the structural and temporal relationships. 
With extensive experiments on both TotalCapture and DIP-IMU, the proposed SMTPFormer has achieved superior accuracy compared with the existing baselines.","PeriodicalId":170327,"journal":{"name":"Proceedings of the 2023 5th International Conference on Image Processing and Machine Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 5th International Conference on Image Processing and Machine Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3582177.3582179","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Human motion prediction forecasts future human poses from historical observations, a capability required by tasks involving human-robot interaction. Most existing approaches make predictions from visual observations, yet vision-based motion capture (Mocap) systems suffer from a significant limitation: occlusion. Occlusion is inevitable for two reasons: first, mapping single-view observations to a 3D human pose is deeply ambiguous; second, in complex in-the-wild environments, other objects block the view and cause missing observations of the subject. Given these factors, some researchers have turned to non-visual systems as alternatives. We propose to capture human poses and make predictions using inertial measurement units (IMUs). To improve accuracy, we propose a novel model based on MetaFormer with a spatial MLP and temporal pooling (SMTPFormer) to learn structural and temporal relationships. In extensive experiments on both TotalCapture and DIP-IMU, the proposed SMTPFormer achieves superior accuracy compared with existing baselines.
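The abstract names the key architectural ingredients — a MetaFormer-style block whose token mixers are a spatial MLP over the joints and pooling over time — without giving implementation details. The following is a minimal, hypothetical NumPy sketch of one such block, assuming pose features shaped (frames, joints, channels); the weight shapes, the PoolFormer-style "pooling minus identity" mixer, and all names here are illustrative assumptions, not the authors' exact SMTPFormer.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the channel (last) dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def metaformer_block(x, w_spatial, w1, w2):
    """One MetaFormer-style block on IMU pose features (illustrative sketch).

    x:         (T, J, C) array -- T frames, J joints/sensors, C channels.
    w_spatial: (J, J) spatial-MLP weights mixing joints (token mixer).
    w1, w2:    (C, H) and (H, C) channel-MLP weights.
    All weights are placeholders, not the paper's trained parameters.
    """
    # Spatial token mixing: an MLP over the joint axis, with residual.
    h = layer_norm(x)
    h = np.einsum('tjc,jk->tkc', h, w_spatial)
    x = x + h

    # Temporal token mixing: average pooling over frames, with residual
    # (PoolFormer-style: subtract the normalized input so the residual
    # branch carries only the pooled difference).
    h = layer_norm(x)
    pooled = h.mean(axis=0, keepdims=True)              # (1, J, C)
    x = x + (np.broadcast_to(pooled, x.shape) - h)

    # Channel MLP, with residual.
    h = layer_norm(x)
    h = np.maximum(h @ w1, 0.0) @ w2                    # two-layer MLP, ReLU
    return x + h

# Example: 10 frames, 17 joints, 32 channels (all sizes are assumptions).
rng = np.random.default_rng(0)
T, J, C, H = 10, 17, 32, 64
x = rng.standard_normal((T, J, C))
y = metaformer_block(x,
                     rng.standard_normal((J, J)) * 0.1,
                     rng.standard_normal((C, H)) * 0.1,
                     rng.standard_normal((H, C)) * 0.1)
print(y.shape)
```

The sketch follows the general MetaFormer recipe (norm, token mixer, residual, then a channel MLP); the paper's actual model would stack several such blocks and train them on IMU sequences, which this toy forward pass does not attempt.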