Real-Time Recognition of In-Place Body Actions and Head Gestures using Only a Head-Mounted Display

Jingbo Zhao, Mingjun Shao, Yaojun Wang, Ruolin Xu
{"title":"Real-Time Recognition of In-Place Body Actions and Head Gestures using Only a Head-Mounted Display","authors":"Jingbo Zhao, Mingjun Shao, Yaojun Wang, Ruolin Xu","doi":"10.1109/VR55154.2023.00026","DOIUrl":null,"url":null,"abstract":"Body actions and head gestures are natural interfaces for interaction in virtual environments. Existing methods for in-place body action recognition often require hardware more than a head-mounted display (HMD), making body action interfaces difficult to be introduced to ordinary virtual reality (VR) users as they usually only possess an HMD. In addition, there lacks a unified solution to recognize in-place body actions and head gestures. This potentially hinders the exploration of the use of in-place body actions and head gestures for novel interaction experiences in virtual environments. We present a unified two-stream 1-D convolutional neural network (CNN) for recognition of body actions when a user performs walking-in-place (WIP) and for recognition of head gestures when a user stands still wearing only an HMD. Compared to previous approaches, our method does not require specialized hardware and/or additional tracking devices other than an HMD and can recognize a significantly larger number of body actions and head gestures than other existing methods. In total, ten in-place body actions and eight head gestures can be recognized with the proposed method, which makes this method a readily available body action interface (head gestures included) for interaction with virtual environments. We demonstrate one utility of the interface through a virtual locomotion task. Results show that the present body action interface is reliable in detecting body actions for the VR locomotion task but is physically demanding compared to a touch controller interface. The present body action interface is promising for new VR experiences and applications, especially for VR fitness applications where workouts are intended.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VR55154.2023.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Body actions and head gestures are natural interfaces for interaction in virtual environments. Existing methods for in-place body action recognition often require hardware beyond a head-mounted display (HMD), making body action interfaces difficult to introduce to ordinary virtual reality (VR) users, who usually possess only an HMD. In addition, there is no unified solution for recognizing both in-place body actions and head gestures, which potentially hinders the exploration of these inputs for novel interaction experiences in virtual environments. We present a unified two-stream 1-D convolutional neural network (CNN) that recognizes body actions when a user performs walking-in-place (WIP) and head gestures when a user stands still, using only an HMD. Compared to previous approaches, our method requires no specialized hardware or additional tracking devices beyond an HMD, and it recognizes a significantly larger number of body actions and head gestures than existing methods. In total, ten in-place body actions and eight head gestures can be recognized, making the method a readily available body action interface (head gestures included) for interaction with virtual environments. We demonstrate one use of the interface through a virtual locomotion task. Results show that the present body action interface reliably detects body actions for the VR locomotion task but is physically demanding compared to a touch controller interface. The interface is promising for new VR experiences and applications, especially VR fitness applications where workouts are intended.
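To make the architecture concrete, the following is a minimal sketch of what a two-stream 1-D CNN classifier over HMD tracking data might look like in PyTorch. The split of the two streams into positional and rotational channels, the window length, the layer widths, and the concatenation-based late fusion are illustrative assumptions, not the authors' exact design; only the class count (ten body actions plus eight head gestures) comes from the abstract.

```python
# Minimal sketch of a two-stream 1-D CNN classifier (PyTorch).
# Assumptions (not from the paper): each stream is a fixed-length window of
# HMD tracking samples -- stream 1: position (x, y, z), stream 2: orientation
# (yaw, pitch, roll). Layer widths and the concat-based fusion are illustrative.
# The 18 output classes correspond to 10 body actions + 8 head gestures.
import torch
import torch.nn as nn


class StreamBranch(nn.Module):
    """One 1-D convolutional branch over a (batch, channels, time) tensor."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch, 64)


class TwoStreamCNN(nn.Module):
    def __init__(self, num_classes: int = 18):
        super().__init__()
        self.pos_branch = StreamBranch(in_channels=3)  # x, y, z position
        self.rot_branch = StreamBranch(in_channels=3)  # yaw, pitch, roll
        self.head = nn.Linear(64 * 2, num_classes)     # late fusion by concat

    def forward(self, pos: torch.Tensor, rot: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.pos_branch(pos), self.rot_branch(rot)], dim=1)
        return self.head(fused)  # unnormalized class logits


# Example: classify a 1-second window sampled at 90 Hz (a typical HMD rate).
model = TwoStreamCNN()
pos = torch.randn(1, 3, 90)  # (batch, channels, time)
rot = torch.randn(1, 3, 90)
logits = model(pos, rot)
print(logits.shape)  # torch.Size([1, 18])
```

A late-fusion design like this keeps each sensor stream's temporal filters independent before combining them for classification; whether the paper fuses earlier or later in the network is not stated in the abstract.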