
Latest publications: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology

EXController
Junjian Zhang, Yaohao Chen, Satoshi Hashizume, Naoya Muramatsu, Kotaro Omomo, Riku Iwasaki, Kaji Wataru, Yoichi Ochiai
This paper presents EXController, a new controller-mounted finger-posture recognition device designed for VR handheld controllers. We seek to provide additional input through real-time vision sensing by attaching a near-infrared (NIR) camera to the controller. We designed and implemented an exploratory prototype with an HTC Vive controller. The NIR camera is modified from a conventional webcam and paired with a data-driven Convolutional Neural Network (CNN) classifier. We designed 12 different finger gestures and trained the CNN classifier on a dataset from 20 subjects, achieving an average cross-subject accuracy of 86.17%, more than 92% on three of the finger postures, and more than 89% on the four best-recognized postures. We also developed a Unity demo that shows matched finger animations, running in real time at approximately 27 fps.
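The reported figures are average and per-posture accuracies. As a minimal sketch (toy data, hypothetical labels — not the paper's evaluation code), per-posture accuracy can be tallied from classifier predictions like this:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, n_classes):
    """Per-posture recognition accuracy from true vs. predicted labels."""
    acc = np.zeros(n_classes)
    for c in range(n_classes):
        mask = y_true == c                      # samples of posture c
        acc[c] = (y_pred[mask] == c).mean() if mask.any() else np.nan
    return acc

# Toy example with 3 of the 12 postures.
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 0, 2])
acc = per_class_accuracy(y_true, y_pred, 3)
print(acc)          # per-posture accuracy
print(acc.mean())   # average accuracy
```

The average of these per-class values is the cross-posture mean the abstract quotes; averaging across held-out subjects gives the cross-subject figure.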
DOI: 10.1145/3281505.3283385 · Published 2018-11-28
Citations: 1
The impact of camera height in cinematic virtual reality
Sylvia Rothe, Boris Kegeles, Mathias Allary, H. Hussmann
When watching a 360° movie with a Head-Mounted Display (HMD), the viewer feels present inside the movie and can experience it in an immersive way. The viewer's head occupies exactly the position the camera had when the scene was recorded. Viewing a movie through an HMD from the camera's perspective can raise challenges: for example, the heights of well-known objects can irritate the viewer when the camera height does not correspond to the viewer's physical eye height. The aim of this work is to study how the camera position influences the viewer's presence, sickness, and user experience. For that we considered several watching postures as well as various camera heights. The results of our experiments suggest that differences between camera and eye height are more readily accepted if the camera position is lower than the viewer's own eye height. Additionally, sitting postures are preferred and easier to adapt to than standing postures. These results can be applied to improve guidelines for 360° filmmakers.
DOI: 10.1145/3281505.3283383 · Published 2018-11-28
Citations: 9
Effects of head-display lag on presence in the oculus rift
Juno Kim, Matthew Moroz, Benjamin Arcioni, S. Palmisano
We measured presence and perceived scene stability in a virtual environment viewed with different head-to-display lag (i.e., system lag) on the Oculus Rift (CV1). System lag was added on top of the measured benchmark system latency (22.3 ms) for our visual scene, rendered with the OpenGL Shading Language (GLSL). Participants made active head oscillations in pitch at 1.0 Hz while viewing the displays. We found that perceived scene instability increased and presence decreased as system lag increased, which we attribute to the effect of multisensory visual-vestibular interactions on the interpretation of the visual information presented.
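Extra system lag of this kind is typically injected by delaying the head-pose stream by a fixed number of frames. A minimal sketch, with all names and numbers hypothetical (the paper does not describe its injection mechanism):

```python
from collections import deque

class LagBuffer:
    """Delays head-pose samples by a whole number of frames to add system lag."""
    def __init__(self, added_lag_ms, frame_ms):
        self.delay_frames = round(added_lag_ms / frame_ms)
        self.buf = deque()

    def push(self, pose):
        self.buf.append(pose)
        # Until the delay line fills, keep returning the oldest pose.
        if len(self.buf) > self.delay_frames:
            return self.buf.popleft()
        return self.buf[0]

# 90 Hz display (~11.1 ms/frame); inject ~33 ms of extra lag (3 frames).
lag = LagBuffer(added_lag_ms=33, frame_ms=11.1)
out = [lag.push(p) for p in range(6)]
print(out)  # early poses repeat until the buffer fills, then output trails by 3 frames
```

The delayed pose, not the live one, is what drives the rendered camera, so the scene visibly trails head motion by the configured amount.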
DOI: 10.1145/3281505.3281607 · Published 2018-11-28
Citations: 8
Towards unobtrusive obstacle detection and notification for VR
P. Wozniak, Antonio Capobianco, N. Javahiraly, D. Curticapean
We present results of a preliminary study of our planned system, which detects obstacles in the physical environment with an RGB-D sensor and signals them unobtrusively through metaphors within the virtual environment (VE).
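The detection side of such a system can be reduced to thresholding the depth channel of an RGB-D frame. A simplified sketch under assumed parameters (the paper does not specify its detection pipeline; function name and thresholds are hypothetical):

```python
import numpy as np

def near_obstacles(depth_m, threshold_m=1.0, min_pixels=50):
    """Flag an obstacle when enough depth pixels are closer than threshold_m.

    Zero depth values (no sensor return) are ignored, as RGB-D sensors
    report 0 where depth could not be measured.
    """
    valid = depth_m > 0
    close = valid & (depth_m < threshold_m)
    return int(close.sum()) >= min_pixels

depth = np.full((120, 160), 3.0)   # empty room, walls ~3 m away
depth[40:60, 70:90] = 0.6          # a 20x20-pixel object at 0.6 m
print(near_obstacles(depth))       # object is within 1 m, so an obstacle is flagged
```

The `min_pixels` floor suppresses single-pixel sensor noise; a real system would additionally cluster close pixels into distinct obstacles before notifying the user.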
DOI: 10.1145/3281505.3283391 · Published 2018-11-28
Citations: 2
Eyestrain impacts on learning job interview with a serious game in virtual reality: a randomized double-blinded study
Alexis D. Souchet, Stéphanie Philippe, Dimitri Zobel, Floriane Ober, Aurélien Léveque, Laure Leroy
Purpose: This study explores eyestrain and its possible impacts on learning performance and quality of experience across different apparatuses and imaging modes. Materials and Methods: 69 participants played a serious game simulating a job interview on a Samsung Gear VR Head-Mounted Display (HMD) or a computer screen. The study was conducted according to a double-blinded protocol. Participants were randomly assigned to 3 groups: PC, HMD biocular, and HMD stereoscopy (S3D). Participants played the game twice, allowing between-group analyses. Eyestrain was assessed pre- and post-exposure on a chin-head rest with optometric measures. Learning traces were obtained in-game by recording response times and scores. Quality of experience was measured with questionnaires assessing Presence, Flow, and Visual Comfort. Results: Eyestrain was significantly higher with HMDs than with the PC, based on the Punctum Proximum of accommodation and visual acuity variables, and tended to be higher with S3D. Learning was more efficient in the HMD conditions based on answering time, but the stereoscopy group performed worse than the biocular one. Quality of experience, based on visual discomfort, was better in the PC condition than with HMDs. Conclusion: Learning expected answers for a job interview is more efficient with HMDs than with a computer screen. However, eyestrain tends to be higher with HMDs and S3D, and the quality of experience was also negatively impacted with HMDs compared to the computer screen. Not using S3D, or lowering its impact, should be explored to provide a comfortable learning experience.
DOI: 10.1145/3281505.3281509 · Published 2018-11-28
Citations: 9
Keep my head on my shoulders!: why third-person is bad for navigation in VR
Daniel Medeiros, R. K. D. Anjos, Daniel Mendes, J. Pereira, A. Raposo, Joaquim Jorge
Head-Mounted Displays place users in virtual reality (VR) by totally occluding the physical world, including users' own bodies. This can make self-awareness problematic. Indeed, researchers have shown that users' feeling of presence and spatial awareness are highly influenced by their virtual representations, and that self-embodied representations (avatars) of their anatomy can make the experience more engaging. On the other hand, recent user studies show a penchant for a third-person view of one's own body, which seemingly improves spatial awareness. However, due to its unnaturalness, we argue that a third-person perspective is not as effective or convenient as a first-person view for task execution in VR. In this paper, we investigate, through a user evaluation, how these perspectives affect task performance and embodiment, focusing on navigation tasks, namely walking while avoiding obstacles. For each perspective, we also compare three levels of realism for the users' representation: a stylized abstract avatar, a mesh-based generic human, and a real-time point-cloud rendering of the users' own body. Our results show that only when a third-person perspective is coupled with a realistic representation is a similar sense of embodiment and spatial awareness felt. In all other cases, a first-person perspective is still better suited for navigation tasks, regardless of representation.
DOI: 10.1145/3281505.3281511 · Published 2018-11-28
Citations: 30
VR sickness measurement with EEG using DNN algorithm
D. Jeong, Sangbong Yoo, Yun Jang
VR technology has recently been developing rapidly and attracting public attention, but VR sickness remains an unsolved problem in the VR experience. VR sickness is presumed to be caused by crosstalk between the sensory and cognitive systems [1]. However, since there is no objective way to measure the sensory and cognitive systems, VR sickness is difficult to measure. In this paper, we collect EEG data while participants experience VR videos, and we propose a Deep Neural Network (DNN) deep-learning algorithm that measures VR sickness from electroencephalogram (EEG) data. Experiments were conducted to find an EEG data preprocessing method and a DNN structure suitable for the task, and an accuracy of 99.12% was obtained in our study.
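The abstract does not give the DNN architecture, so as a hedged illustration only, the forward pass of a small fully connected network over flattened EEG windows might look like the numpy sketch below (channel count, window length, and layer sizes are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical shapes: 14 EEG channels x 64-sample window = 896 features,
# two hidden layers, 2 output classes (sick vs. not sick).
W1, b1 = rng.standard_normal((896, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.standard_normal((128, 32)) * 0.01, np.zeros(32)
W3, b3 = rng.standard_normal((32, 2)) * 0.01, np.zeros(2)

def forward(x):
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    return softmax(h @ W3 + b3)   # class probabilities per window

probs = forward(rng.standard_normal((5, 896)))
print(probs.shape)  # one probability pair per EEG window
```

Training (loss, backpropagation, and the preprocessing the paper searched over) is omitted; this only shows the shape of the classification step.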
DOI: 10.1145/3281505.3283387 · Published 2018-11-28
Citations: 11
A real-time golf-swing training system using sonification and sound image localization
Yuka Tanaka, Homare Kon, H. Koike
Real-time training systems exist that teach the correct golf-swing form by providing visual feedback to the user. However, real-time visual feedback requires users to watch a display during their motion, which leads to incorrect posture. This paper proposes a real-time golf-swing training system using sonification and sound-image localization. The system provides real-time audio feedback based on the difference between pre-recorded model data and real-time user data, consisting of the roll, pitch, and yaw angles of the golf-club shaft. The system also uses sound-image localization so that the user hears the audio feedback from the direction of the club head. The user can thus recognize the current posture of the club without moving their gaze.
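The core of such sonification is a mapping from the model-vs-user angle difference to audio parameters. The following sketch is one plausible mapping, not the paper's actual one; the function name, the volume/pan choice, and the scaling constant are all assumptions:

```python
def swing_feedback(model_angles, user_angles, max_err_deg=30.0):
    """Map (roll, pitch, yaw) error between model and user swing to audio.

    Returns (volume, pan): volume grows with total angular error, and pan
    follows the sign of the yaw error, as a stand-in for the paper's
    sound-image localization toward the club head.
    """
    errors = [u - m for m, u in zip(model_angles, user_angles)]
    total = sum(abs(e) for e in errors)
    volume = min(1.0, total / max_err_deg)                 # 0 = on model, 1 = far off
    pan = max(-1.0, min(1.0, errors[2] / max_err_deg))     # -1 = left, +1 = right
    return volume, pan

vol, pan = swing_feedback(model_angles=(10.0, -5.0, 0.0),
                          user_angles=(13.0, -5.0, -6.0))
print(vol, pan)  # modest volume for a 9-degree total error, panned left
```

Called once per motion-capture frame, this keeps the feedback real-time: silence means the swing matches the model, and louder, lateralized sound means a growing deviation.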
DOI: 10.1145/3281505.3281604 · Published 2018-11-28
Citations: 13
PeriTextAR: utilizing peripheral vision for reading text on augmented reality smart glasses
Yu-Chih Lin, L. Hsu, Mike Y. Chen
Augmented Reality (AR) provides real-time information by superimposing virtual information onto users' view of the real world. Our work is the first to explore how peripheral vision, instead of central vision, can be used to read text on AR and smart glasses. We present PeriTextAR, a multi-word reading interface using rapid serial visual presentation (RSVP) [5]. This lets users observe the real world with central vision while using peripheral vision to read virtual information. We first conducted a lab-based study to determine the effect of different text transformations, comparing reading efficiency across 3 capitalization schemes, 2 font faces, 2 text-animation methods, and 3 different word counts for the RSVP paradigm. A second lab-based study then investigated the performance of PeriTextAR against control text, and the results showed significantly better performance.
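RSVP itself is simple: split the text into fixed-size chunks and show each at the same position for a rate-derived duration. A minimal sketch of the scheduling (function name and rate are hypothetical; rendering on the glasses is out of scope):

```python
def rsvp_schedule(text, wpm=300, chunk=1):
    """Yield (displayed_words, duration_s) frames for RSVP presentation.

    chunk is the number of words shown at once; a multi-word frame stays
    on screen proportionally longer so the overall words-per-minute rate
    is preserved.
    """
    words = text.split()
    dur = 60.0 / wpm * chunk
    for i in range(0, len(words), chunk):
        yield " ".join(words[i:i + chunk]), dur

frames = list(rsvp_schedule(
    "augmented reality shows text in peripheral vision", wpm=300, chunk=2))
print(frames)  # four 2-word frames, each displayed for 0.4 s
```

Because every frame appears at one fixed location, the reader's eyes never have to travel, which is what makes the technique usable in the visual periphery.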
DOI: 10.1145/3281505.3284396 · Published 2018-11-28
Citations: 3
Towards first person gamer modeling and the problem with game classification in user studies
Katherine Tarre, Adam S. Williams, Lukas Borges, N. Rishe, A. Barreto, F. Ortega
Understanding gaming expertise is important in user studies. We present a study of 60 participants playing a first-person shooter game (Counter-Strike: Global Offensive). The study provides results for a keyboard model used to derive an objective measurement of gamers' skill. We also show that there is no correlation between frequency questionnaires and user skill.
DOI: 10.1145/3281505.3281590 · Published 2018-11-28
Citations: 5