Movement tracking and action classification for human behaviour under threat in virtual reality

Ulises Daniel Serratos Hernandez, Jack Brookes, Samson Hall, Juliana K. Sporrer, Sajjad Zabbah, Dominik R. Bach
{"title":"虚拟现实中威胁下人类行为的运动跟踪与动作分类","authors":"Ulises Daniel Serratos Hernandez, Jack Brookes, Samson Hall, Juliana K. Sporrer, Sajjad Zabbah, Dominik R. Bach","doi":"10.1016/j.gaitpost.2023.07.230","DOIUrl":null,"url":null,"abstract":"Understanding and characterising human movements is complex due to the diversity of human actions and their inherent inter, intra, and secular variability. Traditional marker-based, and more recently, some marker-less motion capture (MoCap) systems have demonstrated to be reliable tools for movement analysis. However, in complex experimental set ups involving virtual reality (VR) and free movements (as in [1]), accuracy and reliability tend to decrease due to occlusion, sensor blind spots, marker detachment, and other artifacts. Furthermore, when actions are less distinct, e.g., fast walk and slow run, current classification methods tend to fail when actions overlap, which is expected as even researchers struggle to manually label such actions. Can current marker-less MoCap systems, pose estimation (PE) algorithms, and advanced action classification (AC) methods: (1) accurately track participant movements in VR; (2) cluster participant actions. The experiment consisted of avoiding threats (Fig. 1A) whilst collecting fruit in VR environments (n=29 participants, 5x10m area), see [1]. The Unity® software [2], based on the Unity Experiment Framework [3], was used to create the VR experiment, which was streamed through an HTC vive pro (HTC Corporation) VR headset. Movements were recorded using 5 ELP cameras (1280×720 @120 Hz) synchronised with the Open Broadcaster Software® (OBS) [4]. Openpose [5] was employed for PE (Fig. 1B). Euclidean distances, and angular positions, velocities, and accelerations were derived from cartesian positions. Finally, Uniform Manifold Approximation and Projection (UMAP) was used to embed high-dimensional features into a low-dimensional space, and Hierarchical Density Based Spatial Clustering of Applications (HDBSCAN) was used for classification (see Fig. 1E), similar to B-SOiD [6]. Participants were virtually killed by the threat in 223 episodes, for which the participants’ last poses were estimated. After applying UMAP and HDBSCAN, 5 pose clusters were found (see Fig. 1C-D), which depict: (a) stand up, picking fruit with slow escape; (b) stand up, arms extended and slow escape; (c) long retreat at fast speed; (d) short retreat at medium speed; (e) crouching and picking fruit; (x) 4% unlabelled. Fig. 1. (A) VR-threat, (B) Participant estimated 3D-pose, (C) Pose clusters, (D) Cluster examples, (E) Methodology.Download : Download high-res image (176KB)Download : Download full-size image Marker-less MoCap and PE methods were mostly successful for participants’ last poses. However, in some cases, and during exploration, tracking was lost due to occlusion and sensor blind spots. The results from the AC methods are an indication of the potential use of unsupervised methods to find participant actions under threat in VR. Nevertheless, such clustering is rather general, and had some AC errors, which could not be quantified as further work is needed to understand and define where the threshold of overlapping actions occurs. 
The results are exciting and promising; however, further investigation is needed to validate the findings, and to improve the AC methods.","PeriodicalId":94018,"journal":{"name":"Gait & posture","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Movement tracking and action classification for human behaviour under threat in virtual reality\",\"authors\":\"Ulises Daniel Serratos Hernandez, Jack Brookes, Samson Hall, Juliana K. Sporrer, Sajjad Zabbah, Dominik R. Bach\",\"doi\":\"10.1016/j.gaitpost.2023.07.230\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Understanding and characterising human movements is complex due to the diversity of human actions and their inherent inter, intra, and secular variability. Traditional marker-based, and more recently, some marker-less motion capture (MoCap) systems have demonstrated to be reliable tools for movement analysis. However, in complex experimental set ups involving virtual reality (VR) and free movements (as in [1]), accuracy and reliability tend to decrease due to occlusion, sensor blind spots, marker detachment, and other artifacts. Furthermore, when actions are less distinct, e.g., fast walk and slow run, current classification methods tend to fail when actions overlap, which is expected as even researchers struggle to manually label such actions. Can current marker-less MoCap systems, pose estimation (PE) algorithms, and advanced action classification (AC) methods: (1) accurately track participant movements in VR; (2) cluster participant actions. The experiment consisted of avoiding threats (Fig. 1A) whilst collecting fruit in VR environments (n=29 participants, 5x10m area), see [1]. The Unity® software [2], based on the Unity Experiment Framework [3], was used to create the VR experiment, which was streamed through an HTC vive pro (HTC Corporation) VR headset. Movements were recorded using 5 ELP cameras (1280×720 @120 Hz) synchronised with the Open Broadcaster Software® (OBS) [4]. Openpose [5] was employed for PE (Fig. 1B). Euclidean distances, and angular positions, velocities, and accelerations were derived from cartesian positions. Finally, Uniform Manifold Approximation and Projection (UMAP) was used to embed high-dimensional features into a low-dimensional space, and Hierarchical Density Based Spatial Clustering of Applications (HDBSCAN) was used for classification (see Fig. 1E), similar to B-SOiD [6]. Participants were virtually killed by the threat in 223 episodes, for which the participants’ last poses were estimated. After applying UMAP and HDBSCAN, 5 pose clusters were found (see Fig. 1C-D), which depict: (a) stand up, picking fruit with slow escape; (b) stand up, arms extended and slow escape; (c) long retreat at fast speed; (d) short retreat at medium speed; (e) crouching and picking fruit; (x) 4% unlabelled. Fig. 1. (A) VR-threat, (B) Participant estimated 3D-pose, (C) Pose clusters, (D) Cluster examples, (E) Methodology.Download : Download high-res image (176KB)Download : Download full-size image Marker-less MoCap and PE methods were mostly successful for participants’ last poses. However, in some cases, and during exploration, tracking was lost due to occlusion and sensor blind spots. The results from the AC methods are an indication of the potential use of unsupervised methods to find participant actions under threat in VR. 
Nevertheless, such clustering is rather general, and had some AC errors, which could not be quantified as further work is needed to understand and define where the threshold of overlapping actions occurs. The results are exciting and promising; however, further investigation is needed to validate the findings, and to improve the AC methods.\",\"PeriodicalId\":94018,\"journal\":{\"name\":\"Gait & posture\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Gait & posture\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.gaitpost.2023.07.230\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Gait & posture","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.gaitpost.2023.07.230","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Understanding and characterising human movements is complex due to the diversity of human actions and their inherent inter-individual, intra-individual, and secular variability. Traditional marker-based and, more recently, some marker-less motion capture (MoCap) systems have proven to be reliable tools for movement analysis. However, in complex experimental setups involving virtual reality (VR) and free movement (as in [1]), accuracy and reliability tend to decrease due to occlusion, sensor blind spots, marker detachment, and other artifacts. Furthermore, when actions are less distinct and overlap, e.g. a fast walk versus a slow run, current classification methods tend to fail, which is expected, as even researchers struggle to label such actions manually.

Can current marker-less MoCap systems, pose estimation (PE) algorithms, and advanced action classification (AC) methods (1) accurately track participant movements in VR and (2) cluster participant actions?

The experiment consisted of avoiding threats (Fig. 1A) whilst collecting fruit in VR environments (n = 29 participants, 5 × 10 m area); see [1]. The VR experiment was created in Unity [2] using the Unity Experiment Framework [3] and was streamed through an HTC Vive Pro (HTC Corporation) VR headset. Movements were recorded using five ELP cameras (1280×720 at 120 Hz) synchronised with Open Broadcaster Software (OBS) [4]. OpenPose [5] was employed for PE (Fig. 1B). Euclidean distances as well as angular positions, velocities, and accelerations were derived from the Cartesian keypoint positions. Finally, Uniform Manifold Approximation and Projection (UMAP) was used to embed the high-dimensional features into a low-dimensional space, and Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) was used for classification (see Fig. 1E), similar to B-SOiD [6].

Participants were virtually killed by the threat in 223 episodes, and their last poses in these episodes were estimated. After applying UMAP and HDBSCAN, five pose clusters were found (see Fig. 1C–D), depicting: (a) standing up, picking fruit with slow escape; (b) standing up, arms extended, with slow escape; (c) long retreat at fast speed; (d) short retreat at medium speed; (e) crouching and picking fruit; with (x) 4% of poses left unlabelled.

Fig. 1. (A) VR threat; (B) participant's estimated 3D pose; (C) pose clusters; (D) cluster examples; (E) methodology.

Marker-less MoCap and PE methods were mostly successful for participants' last poses. However, in some cases, and during exploration, tracking was lost due to occlusion and sensor blind spots. The results from the AC methods indicate the potential of unsupervised methods for identifying participant actions under threat in VR. Nevertheless, the clustering is rather coarse and contained some AC errors, which could not be quantified, as further work is needed to understand and define where the threshold between overlapping actions lies. The results are exciting and promising; however, further investigation is needed to validate the findings and to improve the AC methods.
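For concreteness, the feature-extraction and clustering pipeline described above can be sketched in Python roughly as follows. This is a minimal illustration only: it assumes keypoint trajectories have already been extracted (e.g. with OpenPose), and the joint indices, feature set, and clustering parameters are placeholders rather than the authors' actual configuration. UMAP is available via the umap-learn package and HDBSCAN via the hdbscan package.

# Minimal sketch of the feature-extraction and clustering pipeline described
# in the abstract, assuming 2D keypoint trajectories (e.g. from OpenPose) are
# already available as a NumPy array. Joint indices, feature choices, and
# clustering parameters below are illustrative, not the authors' settings.
import numpy as np
import umap      # umap-learn
import hdbscan

def pose_features(keypoints, fps=120.0):
    """Derive distance and angular features from Cartesian keypoint positions.

    keypoints: array of shape (n_frames, n_joints, n_dims).
    Returns an array of shape (n_frames, n_features).
    """
    # Euclidean distances between a few illustrative joint pairs.
    pairs = [(0, 1), (1, 8), (8, 11)]  # hypothetical OpenPose-style indices
    dists = np.stack(
        [np.linalg.norm(keypoints[:, a] - keypoints[:, b], axis=-1)
         for a, b in pairs],
        axis=1,
    )

    # Angular position of one body segment, plus angular velocity and
    # acceleration obtained by finite differences at the camera frame rate.
    seg = keypoints[:, 1] - keypoints[:, 0]
    angle = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))
    ang_vel = np.gradient(angle) * fps
    ang_acc = np.gradient(ang_vel) * fps

    return np.column_stack([dists, angle, ang_vel, ang_acc])

def cluster_poses(features, n_components=2, min_cluster_size=10):
    """Embed high-dimensional features with UMAP, then cluster with HDBSCAN."""
    embedding = umap.UMAP(n_components=n_components).fit_transform(features)
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(embedding)
    return embedding, labels  # label -1 marks points HDBSCAN leaves unlabelled

Applied to the last poses of the 223 threat episodes, an embedding and clustering of this kind would yield the reported pose clusters, with HDBSCAN's label of -1 corresponding to the roughly 4% of samples left unlabelled.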