
Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology: Latest Publications

EXController
Junjian Zhang, Yaohao Chen, Satoshi Hashizume, Naoya Muramatsu, Kotaro Omomo, Riku Iwasaki, Kaji Wataru, Yoichi Ochiai
This paper presents EXController, a new controller-mounted finger posture recognition device designed specifically for VR handheld controllers. We seek to provide additional input through real-time vision sensing by attaching a near-infrared (NIR) camera to the controller. We designed and implemented an exploratory prototype with an HTC Vive controller. The NIR camera is modified from a conventional webcam and paired with a data-driven Convolutional Neural Network (CNN) classifier. We designed 12 different finger postures and trained the CNN classifier on a dataset from 20 subjects, achieving an average accuracy of 86.17% across subjects, with more than 92% on three of the finger postures and more than 89% on the four best-recognized postures. We also developed a Unity demo that shows matched finger animations, running at approximately 27 fps in real time.
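The recognition step described above (a CNN classifying NIR camera frames into 12 finger postures) could be prototyped roughly as follows. This is a minimal PyTorch sketch with illustrative layer sizes and an assumed 64×64 single-channel input, not the authors' actual network.

```python
# Minimal sketch of an NIR-frame posture classifier (assumed architecture,
# not the EXController network): one grayscale frame in, 12 class scores out.
import torch
import torch.nn as nn

class PostureCNN(nn.Module):
    def __init__(self, num_classes: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # NIR input is single-channel
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),  # assumes 64x64 input frames
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = PostureCNN()
    frame = torch.randn(1, 1, 64, 64)        # one synthetic 64x64 NIR frame
    posture_id = model(frame).argmax(dim=1)  # predicted posture index 0..11
    print(posture_id.item())
```

In the described system, the predicted posture index would then drive the matched finger animation shown in the Unity demo.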
{"title":"EXController","authors":"Junjian Zhang, Yaohao Chen, Satoshi Hashizume, Naoya Muramatsu, Kotaro Omomo, Riku Iwasaki, Kaji Wataru, Yoichi Ochiai","doi":"10.1145/3281505.3283385","DOIUrl":"https://doi.org/10.1145/3281505.3283385","url":null,"abstract":"This paper presents EXController, a new controller-mounted finger posture recognition device specially designed for VR handheld controllers. We seek to provide additional input through real-time vision sensing by attaching a near infrared (NIR) camera onto the controller. We designed and implemented an exploratory prototype with a HTC Vive controller. The NIR camera is modified from a traditional webcam and applied with a data-driven Convolutional Neural Network (CNN) classifier. We designed 12 different finger gestures and trained the CNN classifier with a dataset from 20 subjects, achieving an average accuracy of 86.17% across - subjects, and, approximately more than 92% on three of the finger postures, and more than 89% on the top-4 accuracy postures. We also developed a Unity demo that shows matched finger animations, running at approximately 27 fps in real-time.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122726671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The impact of camera height in cinematic virtual reality
Sylvia Rothe, Boris Kegeles, Mathias Allary, H. Hussmann
When watching a 360° movie with a Head-Mounted Display (HMD), the viewer feels as if they are inside the movie and can experience it in an immersive way. The viewer's head occupies exactly the place where the camera was when the scene was recorded. Viewing a movie through an HMD from the camera's perspective can raise some challenges; for example, the heights of familiar objects can irritate the viewer when the camera height does not correspond to the viewer's physical eye height. The aim of this work is to study how the position of the camera influences presence, sickness, and the viewer's user experience. To that end, we considered several viewing postures as well as various camera heights. The results of our experiments suggest that differences between camera and eye height are more readily accepted if the camera is placed lower than the viewer's own eye height. Additionally, sitting postures are preferred and are easier to adapt to than standing postures. These results can be applied to improve guidelines for 360° filmmakers.
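As a purely illustrative reading of the guideline above (place the camera no higher than the expected eye height of the target viewing posture), a small filmmaker-facing helper might look like this; the eye-height values and margin are assumptions, not figures from the paper.

```python
# Hypothetical helper applying the "camera at or below eye height" guideline.
# The eye heights and the safety margin below are illustrative assumptions.

EXPECTED_EYE_HEIGHT_M = {
    "seated": 1.20,    # assumed average eye height while seated, in metres
    "standing": 1.60,  # assumed average eye height while standing, in metres
}

def recommended_camera_height(posture: str, margin_m: float = 0.05) -> float:
    """Return a camera height slightly below the expected eye height."""
    return EXPECTED_EYE_HEIGHT_M[posture] - margin_m

if __name__ == "__main__":
    for posture in EXPECTED_EYE_HEIGHT_M:
        print(posture, round(recommended_camera_height(posture), 2), "m")
```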
{"title":"The impact of camera height in cinematic virtual reality","authors":"Sylvia Rothe, Boris Kegeles, Mathias Allary, H. Hussmann","doi":"10.1145/3281505.3283383","DOIUrl":"https://doi.org/10.1145/3281505.3283383","url":null,"abstract":"Watching a 360° movie with Head Mounted Displays (HMDs) the viewer feels to be inside the movie and can experience it in an immersive way. The head of the viewer is exactly in the same place as the camera was when the scene was recorded. Viewing a movie by HMDs from the perspective of the camera can raise some challenges, e.g. heights of well-known objects can irritate the viewer in the case the camera height does not correspond to the physical eye height. The aim of this work is to study how the position of the camera influences presence, sickness and the user experience of the viewer. For that we considered several watching postures as well as various camera heights. The results of our experiments suggest that differences between camera and eye heights are more accepted, if the camera position is lower than the viewer's own eye height. Additionally, sitting postures are preferred and can be adapted easier than standing postures. These results can be applied to improve guidelines for 360° filmmakers.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131276263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Effects of head-display lag on presence in the Oculus Rift
Juno Kim, Matthew Moroz, Benjamin Arcioni, S. Palmisano
We measured presence and perceived scene stability in a virtual environment viewed with different amounts of head-to-display lag (i.e., system lag) on the Oculus Rift (CV1). System lag was added on top of the measured baseline system latency (22.3 ms) for our visual scene, rendered in the OpenGL Shading Language (GLSL). Participants made active head oscillations in pitch at 1.0 Hz while viewing the displays. We found that perceived scene instability increased and presence decreased as system lag increased, which we attribute to the effect of multisensory visual-vestibular interactions on the interpretation of the visual information presented.
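One common way to inject controlled head-to-display lag on top of a system's baseline latency is to buffer tracked head poses and render with a deliberately delayed sample. The sketch below illustrates that idea and is a hypothetical stand-in, not the authors' implementation.

```python
# Hypothetical lag injector: queue timestamped head poses and hand the renderer
# the newest pose that is at least `added_lag_s` seconds old.
from collections import deque

class LaggedPoseBuffer:
    def __init__(self, added_lag_s: float):
        self.added_lag_s = added_lag_s
        self.samples = deque()  # (timestamp_s, pose) pairs, oldest first

    def push(self, timestamp_s: float, pose) -> None:
        self.samples.append((timestamp_s, pose))

    def pose_for_render(self, now_s: float):
        """Return the most recent pose older than the added lag (or the oldest available)."""
        delayed = None
        while self.samples and now_s - self.samples[0][0] >= self.added_lag_s:
            delayed = self.samples.popleft()
        if delayed is not None:
            return delayed[1]
        return self.samples[0][1] if self.samples else None

if __name__ == "__main__":
    buf = LaggedPoseBuffer(added_lag_s=0.050)      # e.g. +50 ms on top of the baseline
    for i in range(10):
        buf.push(i * 0.011, {"pitch_deg": i})      # ~90 Hz tracker samples (synthetic)
    print(buf.pose_for_render(now_s=0.099))        # pose from roughly 50 ms earlier
```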
{"title":"Effects of head-display lag on presence in the oculus rift","authors":"Juno Kim, Matthew Moroz, Benjamin Arcioni, S. Palmisano","doi":"10.1145/3281505.3281607","DOIUrl":"https://doi.org/10.1145/3281505.3281607","url":null,"abstract":"We measured presence and perceived scene stability in a virtual environment viewed with different head-to-display lag (i.e., system lag) on the Oculus Rift (CV1). System lag was added on top of the measured benchmark system latency (22.3 ms) for our visual scene rendered in OpenGL Shading Language (GLSL). Participants made active head oscillations in pitch at 1.0Hz while viewing displays. We found that perceived scene instability increased and presence decreased when increasing system lag, which we attribute to the effect of multisensory visual-vestibular interactions on the interpretation of the visual information presented.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127577240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Towards unobtrusive obstacle detection and notification for VR
P. Wozniak, Antonio Capobianco, N. Javahiraly, D. Curticapean
We present the results of a preliminary study of our planned system, which detects obstacles in the physical environment with an RGB-D sensor and signals them unobtrusively using metaphors within the virtual environment (VE).
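A rough sketch of the detection half of such a system, assuming a NumPy depth image from the RGB-D sensor and hypothetical distance and coverage thresholds, might look like this; a positive detection would then trigger one of the unobtrusive in-VE metaphors rather than an explicit warning overlay.

```python
# Hypothetical obstacle check: flag an obstacle when enough valid depth pixels
# fall inside the user's immediate walking range. Thresholds are assumptions.
import numpy as np

def obstacle_detected(depth_m: np.ndarray,
                      warning_distance_m: float = 1.0,
                      min_pixel_fraction: float = 0.02) -> bool:
    """Return True if a sufficiently large region lies closer than the warning distance."""
    valid = depth_m > 0                      # zero usually means "no depth reading"
    close = valid & (depth_m < warning_distance_m)
    return close.sum() / max(valid.sum(), 1) > min_pixel_fraction

if __name__ == "__main__":
    depth = np.full((480, 640), 3.0)         # simulated empty room, 3 m away
    depth[200:300, 300:400] = 0.7            # simulated object 0.7 m in front
    print(obstacle_detected(depth))          # True -> show an in-VE metaphor cue
```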
{"title":"Towards unobtrusive obstacle detection and notification for VR","authors":"P. Wozniak, Antonio Capobianco, N. Javahiraly, D. Curticapean","doi":"10.1145/3281505.3283391","DOIUrl":"https://doi.org/10.1145/3281505.3283391","url":null,"abstract":"We present results of a preliminary study on our planned system for the detection of obstacles in the physical environment by means of an RGB-D sensor and their unobtrusive signalling using metaphors within the virtual environment (VE).","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133141419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
PeriTextAR: utilizing peripheral vision for reading text on augmented reality smart glasses
Yu-Chih Lin, L. Hsu, Mike Y. Chen
Augmented Reality (AR) provides real-time information by superimposing virtual information onto users' view of the real world. Our work is the first to explore how peripheral vision, instead of central vision, can be used to read text on AR and smart glasses. We present PeriTextAR, a multiword reading interface using rapid serial visual presentation (RSVP) [5]. This enables users to observe the real world using central vision while using peripheral vision to read virtual information. We first conducted a lab-based study to determine the effect of different text transformations by comparing reading efficiency across 3 capitalization schemes, 2 font faces, 2 text animation methods, and 3 different word-group sizes for the RSVP paradigm. A second lab-based study then compared the performance of PeriTextAR against control text, and the results showed significantly better performance.
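The RSVP presentation itself is simple to sketch: words (or small word groups) are shown one after another at a fixed location for a fixed duration. The snippet below uses the console as a stand-in for the AR text widget; the group size and frame duration are illustrative assumptions, not the parameters tested in the study.

```python
# Hypothetical RSVP driver: present the text in small groups at one fixed spot,
# one group per "frame". Timing and group size are illustrative only.
import time

def rsvp(text: str, words_per_frame: int = 2, frame_duration_s: float = 0.4) -> None:
    words = text.split()
    for i in range(0, len(words), words_per_frame):
        frame = " ".join(words[i:i + words_per_frame])
        print(f"\r{frame:<40}", end="", flush=True)   # stand-in for the glasses' text widget
        time.sleep(frame_duration_s)
    print()

if __name__ == "__main__":
    rsvp("RSVP presents text serially in one place so peripheral vision can follow it")
```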
{"title":"PeriTextAR: utilizing peripheral vision for reading text on augmented reality smart glasses","authors":"Yu-Chih Lin, L. Hsu, Mike Y. Chen","doi":"10.1145/3281505.3284396","DOIUrl":"https://doi.org/10.1145/3281505.3284396","url":null,"abstract":"Augmented Reality (AR) provides real-time information by superimposing virtual information onto users' view of the real world. Our work is the first to explore how peripheral vision, instead of central vision, can be used to read text on AR and smart glasses. We present PeriTextAR, a multiword reading interface using rapid serial visual presentation (RSVP)[5]. This enables users to observe the real world using central vision, while using peripheral vision to read virtual information. We first conducted a lab-based study to determine the effect of different text transformation by comparing reading efficiency among 3 capitalization schemes, 2 font faces, 2 text animation methods, and 3 different numbers of words for RSVP paradigm. Another lab-based study followed, investigating the performance of the PeriTextAR against control text, and the results showed significant better performance.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115845370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Towards first person gamer modeling and the problem with game classification in user studies
Katherine Tarre, Adam S. Williams, Lukas Borges, N. Rishe, A. Barreto, F. Ortega
Understanding gaming expertise is important in user studies. We present a study comprising 60 participants playing a First-Person Shooter game (Counter-Strike: Global Offensive). The study provides results for a keyboard model used to derive an objective measurement of gamers' skill. We also show that there is no correlation between frequency questionnaires and user skill.
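The kind of test behind the "no correlation" claim above can be sketched as a simple correlation check between a self-reported play-frequency score and an objective skill metric; the data, variable names, and the choice of Pearson's r below are hypothetical, not the study's actual analysis.

```python
# Hypothetical correlation check between questionnaire answers and a skill metric.
import numpy as np
from scipy.stats import pearsonr

play_frequency = np.array([1, 2, 2, 3, 4, 4, 5, 5, 6, 7])            # self-reported (synthetic)
skill_score = np.array([40, 55, 30, 62, 45, 58, 50, 70, 48, 52])     # objective metric (synthetic)

r, p = pearsonr(play_frequency, skill_score)
print(f"r = {r:.2f}, p = {p:.3f}")   # a weak, non-significant r would support the claim
```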
{"title":"Towards first person gamer modeling and the problem with game classification in user studies","authors":"Katherine Tarre, Adam S. Williams, Lukas Borges, N. Rishe, A. Barreto, F. Ortega","doi":"10.1145/3281505.3281590","DOIUrl":"https://doi.org/10.1145/3281505.3281590","url":null,"abstract":"Understanding gaming expertise is important in user studies. We present a study comprised of 60 participants playing a First Person Shooter Game (Counter-Strike: Global Offensive). This study provides results related to a keyboard model used to determine an objective measurement of gamers' skill. We also show that there is no correlation between frequency questionnaires and user skill.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116223126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A lightweight and efficient system for tracking handheld objects in virtual reality
Ya-Kuei Chang, Jui-Wei Huang, Chien-Hua Chen, Chien-Wen Chen, Jian-Wei Peng, Min-Chun Hu, Chih-Yuan Yao, Hung-Kuo Chu
While the content of virtual reality (VR) has grown explosively in recent years, the design of user-friendly control interfaces in VR still advances at a slow pace. The most commonly used devices, such as gamepads or controllers, have a fixed shape and weight and thus cannot provide realistic haptic feedback when interacting with virtual objects in VR. In this work, we present a novel and lightweight tracking system for manipulating handheld objects in VR. Specifically, our system can effortlessly synchronize the 3D pose of arbitrary handheld objects between the real world and VR in real time. The tracking algorithm is simple, leveraging a Leap Motion and an IMU sensor to track the object's location and orientation, respectively. We demonstrate the effectiveness of our system with three VR applications that use a pencil, a ping-pong paddle, and a smartphone as control interfaces to provide users with a more immersive VR experience.
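The sensor split described above (optical tracking for position, inertial sensing for orientation) can be sketched as a per-frame pose fusion. The code below is a hypothetical illustration with made-up data types and a simple exponential smoothing term for the optical position, not the authors' implementation.

```python
# Hypothetical per-frame fusion: position from the optical tracker (Leap Motion),
# orientation from the IMU, merged into one pose that drives the virtual object.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple      # (x, y, z) in metres
    orientation: tuple   # quaternion (w, x, y, z)

class HandheldTracker:
    def __init__(self, smoothing: float = 0.3):
        self.smoothing = smoothing   # weight of the newest optical sample (assumed value)
        self.position = None

    def update(self, leap_position: tuple, imu_quaternion: tuple) -> Pose:
        if self.position is None:
            self.position = leap_position
        else:
            a = self.smoothing
            self.position = tuple(a * new + (1 - a) * old
                                  for new, old in zip(leap_position, self.position))
        return Pose(position=self.position, orientation=imu_quaternion)

if __name__ == "__main__":
    tracker = HandheldTracker()
    pose = tracker.update((0.0, 0.1, 0.3), (1.0, 0.0, 0.0, 0.0))  # one synthetic frame
    print(pose)   # would be applied to the virtual object's transform each frame
```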
{"title":"A lightweight and efficient system for tracking handheld objects in virtual reality","authors":"Ya-Kuei Chang, Jui-Wei Huang, Chien-Hua Chen, Chien-Wen Chen, Jian-Wei Peng, Min-Chun Hu, Chih-Yuan Yao, Hung-Kuo Chu","doi":"10.1145/3281505.3281567","DOIUrl":"https://doi.org/10.1145/3281505.3281567","url":null,"abstract":"While the content of virtual reality (VR) has grown explosively in recent years, the advance of designing user-friendly control interfaces in VR still remains a slow pace. The most commonly used device, such as gamepad or controller, has fixed shape and weight, and thus can not provide realistic haptic feedback when interacting with virtual objects in VR. In this work, we present a novel and lightweight tracking system in the context of manipulating handheld objects in VR. Specifically, our system can effortlessly synchronize the 3D pose of arbitrary handheld objects between the real world and VR in realtime performance. The tracking algorithm is simple, which delicately leverages the power of Leap Motion and IMU sensor to respectively track object's location and orientation. We demonstrate the effectiveness of our system with three VR applications use pencil, ping-pong paddle, and smartphone as control interfaces to provide users more immersive VR experience.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121514455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Keep my head on my shoulders!: why third-person is bad for navigation in VR
Daniel Medeiros, R. K. D. Anjos, Daniel Mendes, J. Pereira, A. Raposo, Joaquim Jorge
Head-Mounted Displays are useful for placing users in virtual reality (VR). They do this by totally occluding the physical world, including users' bodies. This can make self-awareness problematic. Indeed, researchers have shown that users' feeling of presence and spatial awareness are highly influenced by their virtual representations, and that self-embodied representations (avatars) of their anatomy can make the experience more engaging. On the other hand, recent user studies show a penchant for a third-person view of one's own body, which seemingly improves spatial awareness. However, due to its unnaturalness, we argue that a third-person perspective is not as effective or convenient as a first-person view for task execution in VR. In this paper, we investigate, through a user evaluation, how these perspectives affect task performance and embodiment, focusing on navigation tasks, namely walking while avoiding obstacles. For each perspective, we also compare three different levels of realism for users' representations: a stylized abstract avatar, a mesh-based generic human, and a real-time point-cloud rendering of the user's own body. Our results show that only when a third-person perspective is coupled with a realistic representation is a similar sense of embodiment and spatial awareness felt. In all other cases, a first-person perspective is still better suited for navigation tasks, regardless of representation.
{"title":"Keep my head on my shoulders!: why third-person is bad for navigation in VR","authors":"Daniel Medeiros, R. K. D. Anjos, Daniel Mendes, J. Pereira, A. Raposo, Joaquim Jorge","doi":"10.1145/3281505.3281511","DOIUrl":"https://doi.org/10.1145/3281505.3281511","url":null,"abstract":"Head-Mounted Displays are useful to place users in virtual reality (VR). They do this by totally occluding the physical world, including users' bodies. This can make self-awareness problematic. Indeed, researchers have shown that users' feeling of presence and spatial awareness are highly influenced by their virtual representations, and that self-embodied representations (avatars) of their anatomy can make the experience more engaging. On the other hand, recent user studies show a penchant towards a third-person view of one's own body to seemingly improve spatial awareness. However, due to its unnaturality, we argue that a third-person perspective is not as effective or convenient as a first-person view for task execution in VR. In this paper, we investigate, through a user evaluation, how these perspectives affect task performance and embodiment, focusing on navigation tasks, namely walking while avoiding obstacles. For each perspective, we also compare three different levels of realism for users' representation, specifically a stylized abstract avatar, a mesh-based generic human, and a real-time point-cloud rendering of the users' own body. Our results show that only when a third-person perspective is coupled with a realistic representation, a similar sense of embodiment and spatial awareness is felt. In all other cases, a first-person perspective is still better suited for navigation tasks, regardless of representation.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124129973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Eyestrain impacts on learning job interview with a serious game in virtual reality: a randomized double-blinded study
Alexis D. Souchet, Stéphanie Philippe, Dimitri Zobel, Floriane Ober, Aurélien Léveque, Laure Leroy
Purpose: This study explores eyestrain and its possible impacts on learning performance and quality of experience using different apparatuses and imaging. Materials and Methods: 69 participants played a serious game simulating a job interview with a Samsung Gear VR Head-Mounted Display (HMD) or a computer screen. The study was conducted according to a double-blinded protocol. Participants were randomly assigned to 3 groups: PC, HMD biocular, and HMD stereoscopy (S3D). Participants played the game twice, allowing between-group analyses. Eyestrain was assessed pre- and post-exposure on a chin-head rest with optometric measures. Learning traces were obtained in-game by registering response times and scores. Quality of experience was measured with questionnaires assessing Presence, Flow, and Visual Comfort. Results: Eyestrain was significantly higher with HMDs than with the PC, based on the punctum proximum of accommodation and visual acuity variables, and tended to be higher with S3D. Learning was more efficient in the HMD conditions in terms of answering time, but the stereoscopy group performed worse than the biocular one. Quality of experience, in terms of visual discomfort, was better in the PC condition than with the HMDs. Conclusion: Learning the expected answers for a job interview is more efficient with HMDs than with a computer screen. However, eyestrain tends to be higher when using HMDs and S3D. Quality of experience was also negatively impacted with HMDs compared to the computer screen. Not using S3D, or lowering its impact, should be explored to provide a comfortable learning experience.
{"title":"Eyestrain impacts on learning job interview with a serious game in virtual reality: a randomized double-blinded study","authors":"Alexis D. Souchet, Stéphanie Philippe, Dimitri Zobel, Floriane Ober, Aurélien Léveque, Laure Leroy","doi":"10.1145/3281505.3281509","DOIUrl":"https://doi.org/10.1145/3281505.3281509","url":null,"abstract":"Purpose: This study explores eyestrain and its possible impacts on learning performances and quality of experience using different apparatuses and imaging. Materials and Methods: 69 participants played a serious game simulating a job interview with a Samsung Gear VR Head Mounted Display (HMD) or a computer screen. The study was conducted according to a double-blinded protocol. Participants were randomly assigned to 3 groups: PC, HMD biocular and HMD stereoscopy (S3D). Participants played the game twice, allowing between group analyses. Eyestrain was assessed pre- and post-exposure on a chin-head rest with optometric measures. Learning traces were obtained in-game by registering response time and scores. Quality of experience was measured with questionnaires assessing Presence, Flow and Visual Comfort. Results: eyestrain was significantly higher with HMDs than PC based on Punctum Proximum of accommodation and visual acuity variables and tends to be higher with S3D. Learning was more efficient in HMDs conditions based on time for answering but the group with stereoscopy performed lower than the binocular imaging one. Quality of Experience was better based on visual discomfort with the PC condition than with HMDs. Conclusion: learning expected answers from a job interview is more efficient while using HMDs than a computer screen. However, eyestrain tends to be higher while using HMDs and S3D. The quality of experience was also negatively impacted with HMDs compared to computer screen. Not using S3D or lowering its impact should be explored to provide comfortable learning experience.1","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121983962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Resolving occlusion for 3D object manipulation with hands in mixed reality
Qi Feng, Hubert P. H. Shum, S. Morishima
Due to the need to interact with virtual objects, hand-object interaction has become an important element in mixed reality (MR) applications. In this paper, we propose a novel approach to handling the occlusion of augmented 3D objects manipulated with the hands, exploiting the nature of hand poses and combining tracking-based and model-based methods to achieve a complete mixed reality experience without heavy computation, complex manual segmentation, or the need to wear special gloves. The experimental results show a faster-than-real-time frame rate and high accuracy of the rendered virtual appearance, and a user study verifies a more immersive experience compared to previous approaches. We believe that the proposed method can improve a wide range of mixed reality applications that involve hand-object interactions.
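At its core, the occlusion problem reduces to a per-pixel depth test between the observed hand and the rendered virtual object. The NumPy sketch below illustrates only that baseline idea; the paper's actual method additionally combines tracking-based and model-based cues.

```python
# Baseline depth-test compositing (illustrative only): per pixel, the nearer
# surface wins, so the real hand correctly occludes the virtual object.
import numpy as np

def composite(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion: show the hand where it is closer than the virtual object."""
    hand_in_front = real_depth < virtual_depth            # boolean mask, H x W
    return np.where(hand_in_front[..., None], real_rgb, virtual_rgb)

if __name__ == "__main__":
    h, w = 4, 4
    real_rgb = np.zeros((h, w, 3))
    real_rgb[..., 0] = 255                  # "hand" drawn red
    virtual_rgb = np.zeros((h, w, 3))
    virtual_rgb[..., 2] = 255               # virtual object drawn blue
    real_depth = np.full((h, w), 0.4)
    real_depth[:2] = 0.2                    # hand nearer in the top half
    virtual_depth = np.full((h, w), 0.3)
    print(composite(real_rgb, real_depth, virtual_rgb, virtual_depth)[:, :, 0])
```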
{"title":"Resolving occlusion for 3D object manipulation with hands in mixed reality","authors":"Qi Feng, Hubert P. H. Shum, S. Morishima","doi":"10.1145/3281505.3283390","DOIUrl":"https://doi.org/10.1145/3281505.3283390","url":null,"abstract":"Due to the need to interact with virtual objects, the hand-object interaction has become an important element in mixed reality (MR) applications. In this paper, we propose a novel approach to handle the occlusion of augmented 3D object manipulation with hands by exploiting the nature of hand poses combined with tracking-based and model-based methods, to achieve a complete mixed reality experience without necessities of heavy computations, complex manual segmentation processes or wearing special gloves. The experimental results show a frame rate faster than real-time and a great accuracy of rendered virtual appearances, and a user study verifies a more immersive experience compared to past approaches. We believe that the proposed method can improve a wide range of mixed reality applications that involve hand-object interactions.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130513421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9