
Latest publications: ACM Symposium on Applied Perception 2019

Assessment of Driver Attention during a Safety Critical Situation in VR to Generate VR-based Training
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343138
Efe Bozkir, David Geisler, Enkelejda Kasneci
Crashes involving pedestrians on urban roads can be fatal. In order to prevent such crashes and provide a safer driving experience, adaptive pedestrian warning cues can help to detect risky pedestrians. However, it is difficult to test such systems in the wild and to train drivers to use these systems in safety-critical situations. This work investigates whether low-cost virtual reality (VR) setups, along with gaze-aware warning cues, could be used for driver training, by analyzing driver attention during an unexpected pedestrian crossing on an urban road. Our analyses show significant differences in distances to crossing pedestrians, pupil diameters, and driver accelerator inputs when the warning cues were provided. Overall, there is a strong indication that VR and head-mounted displays (HMDs) could be used to generate attention-increasing driver training packages for safety-critical situations.
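The abstract does not specify how the gaze-aware cues were triggered; as a rough illustration, a cue could fire whenever the pedestrian falls outside a cone around the driver's gaze direction. The function name, cone half-angle, and coordinates below are hypothetical, not the authors' implementation:

```python
import numpy as np

def needs_warning(gaze_dir, head_pos, pedestrian_pos, cone_deg=20.0):
    """Hypothetical trigger: warn when the pedestrian lies outside a
    cone_deg half-angle cone around the driver's gaze direction."""
    to_ped = np.asarray(pedestrian_pos, float) - np.asarray(head_pos, float)
    to_ped /= np.linalg.norm(to_ped)
    gaze = np.asarray(gaze_dir, float)
    gaze /= np.linalg.norm(gaze)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze, to_ped), -1.0, 1.0)))
    return bool(angle > cone_deg)

# Driver looks straight down the road; pedestrian stands 45 degrees to the side.
print(needs_warning([0, 0, 1], [0, 0, 0], [10, 0, 10]))  # → True
```

A real system would also weight pedestrian distance and trajectory, but the gaze-cone test captures the "gaze-aware" part of the idea.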
Citations: 31
The Influence of the Viewpoint in a Self-Avatar on Body Part and Self-Localization
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343124
Albert H. van der Veer, Adrian J. T. Alsmith, M. Longo, Hong Yu Wong, D. Diers, Matthias Bues, Anna P. Giron, B. Mohler
The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. Therefore, we aimed to determine if manipulating the viewpoint to either the height of the eyes or to the height of the chest would influence self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as “directly at you” (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase where participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%) and below the torso (12%). We only found an influence of viewpoint (eye- vs chest-height) during the self-avatar adaptation phase for body part localization and not for self-localization. The overall change in error distance for body part localization for the viewpoint at eye-height was small (M = –2.8 cm), while the overall change in error distance for the viewpoint at chest-height was significantly larger, and in the upwards direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results on the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues, or memory, ground the self when in VR. However, the present results caution against the use of altered viewpoints in applications where veridical position sense of body parts is required.
Citations: 6
The Effect of Motion on the Perception of Material Appearance
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343122
Ruiquan Mao, Manuel Lagunas, B. Masiá, D. Gutierrez
We analyze the effect of motion on the perception of material appearance. First, we create a set of stimuli containing 72 realistic materials, rendered with varying degrees of linear motion blur. Then we launch a large-scale study on Mechanical Turk to rate a given set of perceptual attributes, such as brightness, roughness, or the perceived strength of reflections. Our statistical analysis shows that certain attributes undergo a significant change under motion, altering appearance perception. In addition, we further investigate the perception of brightness for the particular cases of rubber and plastic materials. We create new stimuli with ten different luminance levels and seven degrees of motion, and launch a new user study to retrieve their perceived brightness. From the users’ judgements, we build two-dimensional maps showing how perceived brightness varies as a function of the luminance and motion of the material.
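The two-dimensional maps described at the end can be produced by averaging ratings into a luminance × motion grid; a minimal sketch assuming ratings arrive as (luminance index, motion index, rating) triples, with the 10 × 7 grid sized to the levels mentioned above (the data and function name are made up):

```python
import numpy as np

def brightness_map(responses, n_lum=10, n_motion=7):
    """Average perceived-brightness ratings into a luminance x motion grid.
    responses: iterable of (lum_idx, motion_idx, rating) triples."""
    total = np.zeros((n_lum, n_motion))
    count = np.zeros((n_lum, n_motion))
    for li, mi, rating in responses:
        total[li, mi] += rating
        count[li, mi] += 1
    with np.errstate(invalid="ignore"):
        return total / count  # NaN marks cells with no responses

# Toy responses: two ratings for the darkest static cell, one for the brightest moving one.
m = brightness_map([(0, 0, 3.0), (0, 0, 5.0), (9, 6, 8.0)])
print(m[0, 0], m[9, 6])  # → 4.0 8.0
```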
Citations: 8
How Video Game Locomotion Methods Affect Navigation in Virtual Environments
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343131
Richard A. Paris, Joshua Klag, P. Rajan, Lauren E. Buck, T. McNamara, Bobby Bodenheimer
Navigation, or the means by which people find their way in an environment, depends on the ability to combine information from multiple sources so that properties of an environment, such as the location of a goal, can be estimated. An important source of information for navigation is the set of spatial cues generated by self-motion. Navigation based solely on body-based cues generated by self-motion is called path integration. In virtual reality and video games, many locomotion systems, that is, methods that move users through a virtual environment, can distort or deprive users of important self-motion cues. There has been much study of this issue, and in this paper we extend that study in novel directions by assessing the effect of four game-like locomotion interfaces on navigation performance using path integration. The salient features of our locomotion interfaces are that two are primarily continuous, i.e., more like a joystick, and two are primarily discrete, i.e., more like teleportation. Our main finding is that, from the perspective of path integration, people are able to use all methods, although continuous methods outperform discrete methods.
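Path integration performance is often scored with triangle-completion-style measures, e.g., the distance between where a participant stops and the true return point. The abstract does not detail the authors' measure, so the following is a generic sketch with made-up coordinates:

```python
import math

def homing_error(walked_path, goal):
    """Distance from the participant's final position to the true goal.
    walked_path: list of (x, y) positions; goal: (x, y) target position."""
    end_x, end_y = walked_path[-1]
    return math.hypot(end_x - goal[0], end_y - goal[1])

# Triangle completion: two outbound legs, then an imperfect return home.
path = [(0, 0), (4, 0), (4, 3), (0.5, 0.5)]  # should have returned to (0, 0)
print(homing_error(path, (0, 0)))  # ≈ 0.707
```

Comparing this error across the four locomotion interfaces would quantify how much each one degrades the self-motion cues path integration relies on.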
Citations: 25
Stimulating the Brain in VR: Effects of Transcranial Direct-Current Stimulation on Redirected Walking
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343125
E. Langbehn, Frank Steinicke, Ping Koo-Poeggel, L. Marshall, G. Bruder
Redirected walking (RDW) enables virtual reality (VR) users to explore large virtual environments (VEs) in confined tracking spaces by guiding users on different paths in the real world than in the VE. However, so far, spaces larger than typical room-scale setups of 5 m × 5 m are still required to allow infinitely straight walking, i.e., to prevent a subjective mismatch between real and virtual paths. This mismatch could in theory be reduced by interacting with the underlying brain activity. Transcranial direct-current stimulation (tDCS) presents a simple method for modifying ongoing cortical activity and excitability levels. Hence, this approach provides enormous potential to widen detection thresholds for RDW, and consequently reduce the above-mentioned space requirements. In this paper, we conducted a psychophysical experiment using tDCS to evaluate detection thresholds for RDW gains. In the stimulation condition, 1.25 mA cathodal tDCS was applied over the prefrontal cortex (AF4, with Pz for the return current) for 20 minutes. tDCS failed to exert a significant overall effect on detection thresholds. However, for the highest gain only, path deviance was significantly modified by tDCS. In addition, subjectively reported disorientation was significantly lower during tDCS than in the sham condition. Along the same lines, oculomotor cybersickness symptoms after the session were significantly decreased compared to baseline in tDCS, while there was no significant effect in sham. This work presents the first use of tDCS during virtual walking, which opens new vistas for future research in the area of neurostimulation in VR.
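Detection thresholds for RDW gains are conventionally estimated by fitting a psychometric function to the per-gain proportion of "noticed" responses. The sketch below uses a brute-force logistic fit and a 75% criterion; the grid ranges, criterion, and response data are illustrative, not the paper's:

```python
import numpy as np

def fit_threshold(gains, p_detect, criterion=0.75):
    """Grid-search least-squares fit of a logistic psychometric function,
    returning the gain at which detection probability crosses `criterion`."""
    g = np.asarray(gains, float)
    p = np.asarray(p_detect, float)
    best, best_err = (g.mean(), 0.1), np.inf
    for mu in np.linspace(g.min(), g.max(), 200):       # candidate midpoints
        for s in np.linspace(0.01, 1.0, 100):           # candidate slopes
            pred = 1.0 / (1.0 + np.exp(-(g - mu) / s))
            err = np.sum((pred - p) ** 2)
            if err < best_err:
                best, best_err = (mu, s), err
    mu, s = best
    return mu + s * np.log(criterion / (1.0 - criterion))  # inverse logistic

gains = [0.6, 0.8, 1.0, 1.2, 1.4]     # illustrative RDW gains tested
p_yes = [0.05, 0.2, 0.5, 0.8, 0.95]   # fraction of trials where the gain was noticed
print(round(fit_threshold(gains, p_yes), 2))
```

Comparing the fitted threshold between tDCS and sham sessions is what "widening detection thresholds" would amount to in practice.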
Citations: 11
Perceptual Comparison of Procedural and Data-Driven Eye Motion Jitter
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343130
S. Jörg, A. Duchowski, Krzysztof Krejtz, Anna Niedzielska
Research has shown that keyframed eye motions are perceived as more realistic when some noise is added to eyeball motions and to pupil size changes. We investigate whether this noise can be synthesized with standard techniques, e.g., procedural or data-driven approaches, rather than motion captured. In a two-alternative forced choice task, we compare eye animations created with four different techniques: motion-captured, procedural, data-driven, and keyframed (lacking noise). Our perceptual experiment uses three character models with different levels of realism and two motions. Our results suggest that procedural and data-driven noise can be used to create animations with perceived naturalness similar to our motion-captured approach. Participants’ eye movements when viewing the animations show that animations without jitter yielded fewer fixations, suggesting they were easier to dismiss as unnatural.
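A procedural jitter signal of the kind compared here can be as simple as low-pass-filtered Gaussian noise superimposed on the keyframed gaze angles; the amplitude and filter constant below are arbitrary placeholders, not values from the paper:

```python
import numpy as np

def procedural_jitter(n_frames, amplitude_deg=0.1, smoothing=0.9, seed=0):
    """Per-frame gaze jitter (one axis, degrees): white Gaussian noise run
    through a one-pole low-pass filter so consecutive frames stay coherent."""
    rng = np.random.default_rng(seed)
    white = rng.normal(0.0, amplitude_deg, n_frames)
    jitter = np.empty(n_frames)
    acc = 0.0
    for i, sample in enumerate(white):
        acc = smoothing * acc + (1.0 - smoothing) * sample
        jitter[i] = acc
    return jitter

offsets = procedural_jitter(120)   # two seconds of jitter at 60 fps
gaze_yaw = 15.0 + offsets          # superimpose on a keyframed yaw angle
```

A data-driven variant would instead replay (or resample) jitter segments extracted from recorded eye-tracking data.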
Citations: 1
Infinity Walk in VR: Effects of Cognitive Load on Velocity during Continuous Long-Distance Walking
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343119
Omar Janeh, Nikolaos Katzakis, Jonathan Tong, Frank Steinicke
Bipedal walking is generally considered the most natural and common locomotion technique for humans in the physical world, and the most presence-enhancing form of locomotion in virtual reality (VR). However, there are significant differences in the way people walk in VR compared to their walking behaviour in the real world. For instance, previous studies have shown a significant decrease in gait parameters, in particular velocity and step length, in the virtual environment (VE). However, those studies have only considered short periods of walking. In contrast, many VR applications involve extended exposure to the VE and often include additional cognitive tasks such as way-finding. Hence, it remains an open question whether velocity during VR walking will slow down further over time, or whether users of VR will eventually speed up, adapt their velocity to the VE, and move with the same speed as in the real world. In this paper we present a study comparing the effects of a cognitive task on velocity during long-distance walking in VR and in the real world. For this purpose, we used an exact virtual replica of the users’ real surroundings. To reliably evaluate locomotion performance, we analyzed walking velocity during long-distance walking over 60 consecutive cycles using a left/right figure-8 protocol, which avoids the limitations of treadmill and non-consecutive walking protocols (i.e., start-stop). The results show a significant decrease of velocity in the VE compared to the real world, even after 60 consecutive cycles, with and without the cognitive task.
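Walking velocity over consecutive cycles can be computed from tracked head positions by summing the path length within each cycle and dividing by its duration; a minimal sketch with toy data (the authors' exact analysis pipeline is not given in the abstract):

```python
import numpy as np

def mean_velocity_per_cycle(positions, timestamps, cycle_bounds):
    """Mean walking speed (m/s) within each figure-8 cycle.
    positions: (N, 2) x/y positions in metres, timestamps: (N,) seconds,
    cycle_bounds: (start_idx, end_idx) sample indices for each cycle."""
    pos = np.asarray(positions, float)
    t = np.asarray(timestamps, float)
    speeds = []
    for a, b in cycle_bounds:
        dist = np.sum(np.linalg.norm(np.diff(pos[a:b + 1], axis=0), axis=1))
        speeds.append(float(dist / (t[b] - t[a])))
    return speeds

# Toy data: a straight walk at 1 m/s sampled at 1 Hz, treated as one cycle.
pos = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(mean_velocity_per_cycle(pos, [0, 1, 2, 3], [(0, 3)]))  # → [1.0]
```

Plotting these per-cycle speeds against cycle number is one way to see whether walkers speed up, slow down, or plateau over 60 cycles.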
Citations: 13
Differences in Haptic and Visual Perception of Expressive 1DoF Motion
Pub Date: 2019-09-19 DOI: 10.1145/3343036.3343136
Elyse D. Z. Chase, Sean Follmer
Humans can perceive motion through a variety of different modalities. Vision is a well-explored modality; however, haptics can greatly increase the richness of information provided to the user. The detailed differences in the perception of motion between these two modalities are not well studied and can provide an additional avenue for communication between humans and haptic devices or robots. We analyze these differences in the context of users’ interactions with a non-anthropomorphic haptic device. In this study, participants experienced different levels and combinations of stiffness, jitter, and acceleration curves via a one-degree-of-freedom linear motion display. These conditions were presented with and without the opportunity for users to touch the setup. Participants rated the experiences within the contexts of emotion, anthropomorphism, likeability, and safety using the SAM scale, HRI metrics, as well as qualitative feedback. A positive correlation between stiffness and dominance, specifically due to the haptic condition, was found; additionally, with the introduction of jitter, decreases in perceived arousal and likeability were recorded. Trends relating acceleration curves to perceived dominance, as well as stiffness and jitter to valence, arousal, dominance, likeability, and safety, were also found. These results suggest the importance of considering which sensory modalities are more actively engaged during interactions and, concomitantly, which behaviors designers should employ in the creation of non-anthropomorphic interactive haptic devices to achieve a particular interpreted affective state.
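The stiffness, jitter, and acceleration-curve manipulations suggest a parameterized 1-DoF position profile; a hypothetical sketch contrasting a smoothstep ease curve with a constant-velocity ramp, plus Gaussian jitter (none of these parameter values come from the paper):

```python
import numpy as np

def motion_profile(duration_s, hz, accel_shape="ease", jitter_amp=0.0, seed=0):
    """Normalized 0-to-1 position profile for a 1-DoF linear motion display.
    accel_shape "ease" uses a smoothstep ramp (gentle accel/decel);
    "linear" moves at constant velocity. jitter_amp adds Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, int(duration_s * hz))
    if accel_shape == "ease":
        pos = t * t * (3.0 - 2.0 * t)  # smoothstep: zero velocity at both ends
    else:
        pos = t.copy()                 # constant-velocity ramp
    return pos + rng.normal(0.0, jitter_amp, t.size)

smooth = motion_profile(2.0, 60)                   # low-jitter condition
shaky = motion_profile(2.0, 60, jitter_amp=0.002)  # high-jitter condition
```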
Citations: 2
Empirical Evaluation of the Interplay of Emotion and Visual Attention in Human-Virtual Human Interaction 人-虚拟人互动中情感和视觉注意相互作用的实证评价
Pub Date : 2019-09-19 DOI: 10.1145/3343036.3343118
Matias Volonte, Reza Ghaiumy Anaraky, Bart P. Knijnenburg, A. Duchowski, Sabarish V. Babu
We examined the effect of rendering style and the interplay between attention and emotion in users interacting with a virtual patient in a medical training simulator. The virtual patient was rendered in one of three styles sampled from the photorealistic to non-photorealistic continuum: Near-Realistic, Cartoon, or Pencil-Shader. In a mixed-design study, we collected 45 participants' emotional responses and gaze behavior using surveys and an eye tracker while they interacted with a virtual patient whose medical condition deteriorated over time. We used a cross-lagged panel analysis of attention and emotion to understand their reciprocal relationship over time. We also performed a mediation analysis to compare the extent to which the virtual agent's appearance and his affective behavior impacted users' emotional and attentional responses. Results showed an interplay between participants' visual attention and emotion over time, and also showed that attention was a stronger variable than emotion during interaction with the virtual human.
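A cross-lagged panel analysis of the kind mentioned above boils down to two lagged regressions, each t2 variable regressed on both t1 variables, with the cross paths then compared. A minimal sketch on synthetic panel data (all coefficients and variable names are hypothetical, chosen only to mirror the qualitative finding that attention drives emotion more than the reverse):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical panel data: attention and emotion measured at times t1 and t2.
attention_t1 = rng.normal(size=n)
emotion_t1 = rng.normal(size=n)
# Simulate t2 so attention's cross-lagged effect on emotion (0.4) is stronger
# than the reverse path (0.1).
attention_t2 = 0.6 * attention_t1 + 0.1 * emotion_t1 + 0.3 * rng.normal(size=n)
emotion_t2 = 0.5 * emotion_t1 + 0.4 * attention_t1 + 0.3 * rng.normal(size=n)

def ols(y, *predictors):
    """Least-squares coefficients (intercept first) of y on the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Cross-lagged paths: each t2 variable on both t1 variables.
_, auto_a, cross_e_to_a = ols(attention_t2, attention_t1, emotion_t1)
_, auto_e, cross_a_to_e = ols(emotion_t2, emotion_t1, attention_t1)
# cross_a_to_e > cross_e_to_a indicates attention is the stronger driver.
```

In the actual study the same comparison would be made on the measured attention and emotion scores rather than simulated ones.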
Citations: 5
Transsaccadic Awareness of Scene Transformations in a 3D Virtual Environment 三维虚拟环境中场景变换的跨眼感知
Pub Date : 2019-09-19 DOI: 10.1145/3343036.3343121
Maryam Keyvanara, R. Allison
In gaze-contingent displays, the viewer's eye movement data are processed in real time to adjust the graphical content. To provide a high-quality user experience, these graphical updates must occur with minimal delay. Such updates can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality, and redirected walking. For such applications, perceptual saccadic suppression can help hide the graphical artifacts. We investigated whether the visibility of these updates depends on the type of image transformation. Users viewed 3D scenes in which the displacement of a target object triggered them to generate a vertical or horizontal saccade, during which a translation or rotation was applied to the virtual camera used to render the scene. After each trial, users indicated the direction of the scene change in a forced-choice task. Results show that the type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations about the roll axis were the most detectable, while horizontal and vertical translations were least noticed. We confirm that large 3D adjustments to the scene viewpoint can be introduced unobtrusively and with low latency during saccades, but the allowable extent of the correction varies with the transformation applied.
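Gaze-contingent updates of this kind hinge on detecting the saccade itself before the camera transform is applied. A common basis is velocity-threshold detection on the eye-tracker samples; the sketch below is illustrative only (the sample rate, threshold, and gaze trace are assumptions, not values from the paper):

```python
# Velocity-threshold saccade detection on a stream of gaze samples.
SAMPLE_RATE_HZ = 1000          # assumed eye-tracker sampling rate
VELOCITY_THRESHOLD = 100.0     # deg/s, a commonly used saccade threshold

def detect_saccade_samples(gaze_deg):
    """Return indices whose angular velocity exceeds the saccade threshold.

    gaze_deg: list of (x, y) gaze positions in visual degrees, one per sample.
    """
    dt = 1.0 / SAMPLE_RATE_HZ
    saccadic = []
    for i in range(1, len(gaze_deg)):
        dx = gaze_deg[i][0] - gaze_deg[i - 1][0]
        dy = gaze_deg[i][1] - gaze_deg[i - 1][1]
        velocity = (dx * dx + dy * dy) ** 0.5 / dt
        if velocity > VELOCITY_THRESHOLD:
            saccadic.append(i)
    return saccadic

# Fixation (slow drift), then a fast horizontal saccade, then fixation again.
trace = [(0.0, 0.0), (0.01, 0.0), (0.02, 0.0),   # drift: ~10 deg/s
         (2.0, 0.0), (4.0, 0.0), (6.0, 0.0),     # saccade: ~2000 deg/s
         (6.01, 0.0), (6.02, 0.0)]               # drift again
saccade_idx = detect_saccade_samples(trace)       # -> [3, 4, 5]
```

In a gaze-contingent renderer, the camera rotation or translation would be applied only during the samples flagged here, so the change falls inside the window of saccadic suppression.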
Citations: 3
Journal: ACM Symposium on Applied Perception 2019