
Proceedings of the 15th ACM Symposium on Applied Perception: Latest Publications

Comparing input methods and cursors for 3D positioning with head-mounted displays
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225167
Junwei Sun, W. Stuerzlinger, B. Riecke
Moving objects is an important task in 3D user interfaces. In this work, we focus on (precise) 3D object positioning in immersive virtual reality systems, especially head-mounted displays (HMDs). To evaluate input method performance for 3D positioning, we focus on an existing sliding algorithm, in which objects slide on any contact surface. Sliding enables rapid positioning of objects in 3D scenes on a desktop system but has yet to be evaluated in an immersive system. We performed a user study that compared the efficiency and accuracy of different input methods (mouse, hand-tracking, and trackpad) and cursor display conditions (stereo cursor and one-eyed cursor) for 3D positioning tasks with the HTC Vive. The results showed that the mouse outperformed hand-tracking and the trackpad in terms of both efficiency and accuracy. The stereo cursor and one-eyed cursor did not demonstrate a significant difference in performance, yet the stereo cursor condition was rated more favourably. For situations where the user is seated in immersive VR, the mouse is thus still the best input device for precise 3D positioning.
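The sliding algorithm itself is only summarised above; a minimal sketch of contact-surface placement, assuming a generic ray-cast callback and an illustrative half-height offset (neither taken from the paper), could look like this:

```python
import numpy as np

def slide_object(cursor_origin, cursor_dir, scene_raycast, fallback_depth=2.0):
    """Place the manipulated object where the cursor ray first hits the scene.

    cursor_origin, cursor_dir: the 3D ray driven by the input device.
    scene_raycast: callable returning (hit_point, hit_normal) or None (assumed API).
    fallback_depth: placement distance along the ray when nothing is hit (assumed).
    """
    hit = scene_raycast(np.asarray(cursor_origin), np.asarray(cursor_dir))
    if hit is not None:
        hit_point, hit_normal = hit
        # The object "slides" along whatever surface the cursor touches,
        # offset along the surface normal so it rests on the surface.
        return np.asarray(hit_point) + 0.5 * np.asarray(hit_normal)  # 0.5 = assumed half-height
    # No contact surface under the cursor: keep the object at a fixed depth.
    return np.asarray(cursor_origin) + fallback_depth * np.asarray(cursor_dir)
```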
Citations: 19
Effects of anthropomorphic fidelity of self-avatars on reach boundary estimation in immersive virtual environments
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225170
Elham Ebrahimi, Andrew C. Robb, Leah S. Hartman, C. Pagano, Sabarish V. Babu
Research has shown that self-avatars (life-size representations of the user in Virtual Reality (VR)) can affect how people perceive virtual environments. In this paper, we investigated whether the visual fidelity of a self-avatar affects reach boundary perception, as assessed through two variables: 1) action taken (or verbal response) and 2) correct judgment. Participants were randomly assigned to one of four conditions: i) high-fidelity self-avatar, ii) low-fidelity self-avatar, iii) no avatar (end-effector), and iv) the real world as a reference task group. Results indicate that all three VR viewing conditions differed significantly from the real world with regard to correctly judging the reachability of the target. However, based on verbal responses, only the "no avatar" condition showed a non-trivial difference from the real-world condition. Taken together with the reachability data, participants in the "no avatar" condition were less likely to correctly reach for the reachable targets. Overall, participant performance improved after completing a calibration phase with feedback, such that correct judgments increased and participants reached for fewer unreachable targets.
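The "correct judgment" measure can be scored per trial once a participant's actual reach capability is known; the sketch below assumes that capability is approximated by a calibrated arm-reach measurement, which is an assumption rather than the paper's exact criterion.

```python
def score_reach_trial(target_distance_m, judged_reachable, arm_reach_m):
    """Return True when the participant's judgment matches actual reachability.

    target_distance_m: shoulder-to-target distance for the trial.
    judged_reachable: the response, via action taken or verbal "yes".
    arm_reach_m: the participant's measured maximum reach (assumed criterion).
    """
    actually_reachable = target_distance_m <= arm_reach_m
    return judged_reachable == actually_reachable
```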
Citations: 21
Judging action capabilities in augmented reality
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225168
Grant D. Pointon, Chelsey Thompson, Sarah H. Creem-Regehr, Jeanine K. Stefanucci, Miti Joshi, Richard A. Paris, Bobby Bodenheimer
The utility of mediated environments increases when environmental scale (size and distance) is perceived accurately. We present the use of perceived affordances---judgments of action capabilities---as an objective way to assess space perception in an augmented reality (AR) environment. The current study extends the previous use of this methodology in virtual reality (VR) to AR. We tested two locomotion-based affordance tasks. In the first experiment, observers judged whether they could pass through a virtual aperture presented at different widths and distances, and also judged the distance to the aperture. In the second experiment, observers judged whether they could step over a virtual gap on the ground. In both experiments, the virtual objects were displayed with the HoloLens in a real laboratory environment. We demonstrate that affordances for passing through and perceived distance to the aperture are similar in AR to those measured in the real world, but that judgments of gap-crossing in AR were underestimated. These differences across the two affordances may result from the different spatial characteristics of the virtual objects (on the ground versus extending off the ground).
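Both affordances are commonly modelled as body-scaled ratios; the sketch below uses a pass-through ratio reported in earlier real-world work and a purely illustrative gap ratio, neither of which is taken from this study.

```python
def aperture_affords_passage(aperture_width_m, shoulder_width_m, critical_ratio=1.16):
    """An aperture affords walking through when its width exceeds shoulder width
    by a critical ratio (1.16 comes from earlier real-world studies; assumed here)."""
    return aperture_width_m >= critical_ratio * shoulder_width_m

def gap_affords_stepping_over(gap_width_m, leg_length_m, critical_ratio=0.5):
    """A ground gap affords stepping over when it is small relative to leg length
    (the 0.5 ratio is purely illustrative, not measured in the paper)."""
    return gap_width_m <= critical_ratio * leg_length_m
```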
Citations: 21
Deep learning of biomimetic visual perception for virtual humans
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225161
Masaki Nakada, Honglin Chen, Demetri Terzopoulos
Future generations of advanced, autonomous virtual humans will likely require artificial vision systems that more accurately model the human biological vision system. With this in mind, we propose a strongly biomimetic model of visual perception within a novel framework for human sensorimotor control. Our framework features a biomechanically simulated, musculoskeletal human model actuated by numerous skeletal muscles, with two human-like eyes whose retinas have spatially nonuniform distributions of photoreceptors not unlike biological retinas. The retinal photoreceptors capture the scene irradiance that reaches them, which is computed using ray tracing. Within the sensory subsystem of our model, which continuously operates on the photoreceptor outputs, are 10 automatically-trained, deep neural networks (DNNs). A pair of DNNs drive eye and head movements, while the other 8 DNNs extract the sensory information needed to control the arms and legs. Thus, exclusively by means of its egocentric, active visual perception, our biomechanical virtual human learns, by synthesizing its own training data, efficient, online visuomotor control of its eyes, head, and limbs to perform tasks involving the foveation and visual pursuit of target objects coupled with visually-guided reaching actions to intercept the moving targets.
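The retinal model is only described qualitatively above; one simple way to obtain a fovea-dense, spatially nonuniform photoreceptor layout is to place samples on logarithmically spaced rings. This is an illustrative stand-in, not the authors' actual distribution:

```python
import numpy as np

def foveated_photoreceptor_layout(n_rings=40, n_per_ring=60,
                                  r_min=0.01, r_max=1.0, seed=0):
    """Return (n_rings * n_per_ring, 2) photoreceptor positions in normalised
    retina coordinates, denser near the fovea and sparser in the periphery.
    Logarithmic ring spacing loosely mimics biological sampling density."""
    rng = np.random.default_rng(seed)
    radii = np.geomspace(r_min, r_max, n_rings)   # crowd samples near the centre
    rings = []
    for r in radii:
        angles = rng.uniform(0.0, 2.0 * np.pi, n_per_ring)
        rings.append(np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1))
    return np.concatenate(rings, axis=0)
```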
Citations: 9
Evaluating the effects of four VR locomotion methods: joystick, arm-cycling, point-tugging, and teleporting
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225175
Noah Coomer, Sadler Bullard, W. Clinton, B. Sanders
In this work we present two novel methods of exploring a large immersive virtual environment (IVE) viewed through a head-mounted display (HMD) using the tracked controllers that come standard with commodity-level HMD systems. With the first method, "Point-Tugging," users reach and pull the controller trigger at a point in front of them and move in the direction of the point they "tug" with the controller. With the second method, "Arm-Cycling," users move their arms while pulling the trigger on the hand-held controllers to translate in the yaw direction that their head is facing. We perform a search task experiment to directly compare four locomotion techniques: Joystick, Arm-Cycling, Point-Tugging, and Teleporting. In the joystick condition, a joystick is used to translate the user in the yaw direction of gaze with physical rotations matching virtual rotations. In the teleporting condition, the controllers create an arched beam that allows the user to select a point on the ground and instantly teleport to this location. We find that Arm-Cycling has advantages over the other methods and could be suitable for wide-spread use.
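One plausible reading of Point-Tugging is that the trigger anchors a point in space and the hand's displacement from that anchor drags the user toward it; the sketch below follows that reading, with the gain and parameter names as assumptions rather than the paper's implementation.

```python
import numpy as np

class PointTugging:
    """Per-frame translation for a point-tugging style of locomotion (assumed model)."""

    def __init__(self, gain=1.0):
        self.gain = gain      # metres of travel per metre of tug per second (assumed)
        self.anchor = None    # world-space point grabbed at trigger press

    def update(self, trigger_down, controller_pos, user_pos, dt):
        controller_pos = np.asarray(controller_pos, dtype=float)
        user_pos = np.asarray(user_pos, dtype=float)
        if trigger_down:
            if self.anchor is None:
                self.anchor = controller_pos.copy()   # grab the point in front of the user
            # Pulling the controller away from the anchor moves the user
            # toward the anchored point, i.e. in the direction they "tug".
            pull = self.anchor - controller_pos
            return user_pos + self.gain * pull * dt
        self.anchor = None
        return user_pos
```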
Citations: 66
Analysis of hair shine using rendering and subjective evaluation
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3243891
G. Ramesh, M. Turner, Bjoern Schroeder, Franz-Josef Wortmann
{"title":"Analysis of hair shine using rendering and subjective evaluation","authors":"G. Ramesh, M. Turner, Bjoern Schroeder, Franz-Josef Wortmann","doi":"10.1145/3225153.3243891","DOIUrl":"https://doi.org/10.1145/3225153.3243891","url":null,"abstract":"","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133916097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of virtual acoustics on target-word identification performance in multi-talker environments
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225166
Atul Rungta, Nicholas Rewkowski, Carl Schissler, Philip Robinson, Ravish Mehra, Dinesh Manocha
Many virtual reality applications let multiple users communicate in a multi-talker environment, recreating the classic cocktail-party effect. While there is a vast body of research focusing on the perception and intelligibility of human speech in real-world scenarios with cocktail-party effects, there is little work on accurately modeling and evaluating the effect in virtual environments. Given the goal of evaluating the impact of virtual acoustic simulation on the cocktail-party effect, we conducted experiments to establish the signal-to-noise ratio (SNR) thresholds for target-word identification performance. Our evaluation was performed for sentences from the coordinate response measure corpus in the presence of multi-talker babble. The thresholds were established under varying sound propagation and spatialization conditions. We used a state-of-the-art geometric acoustic system integrated into the Unity game engine to simulate varying conditions of reverberance (direct sound; direct sound and early reflections; direct sound, early reflections, and late reverberation) and spatialization (mono, stereo, and binaural). Our results show that spatialization has the biggest effect on the ability of listeners to discern the target words in multi-talker virtual environments. Reverberance, on the other hand, slightly degrades the ability to discern the target words.
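Establishing an SNR threshold presupposes that the target speech and babble are mixed at controlled levels; a minimal sketch of level-matched mixing (generic signal processing, not the paper's pipeline) is:

```python
import numpy as np

def mix_at_snr(target, babble, snr_db):
    """Mix target speech with multi-talker babble at a requested SNR in dB.

    target, babble: 1-D float arrays at the same sample rate; the babble is
    truncated to the target's length. The babble is rescaled so that
    10*log10(P_target / P_babble) equals snr_db.
    """
    target = np.asarray(target, dtype=float)
    babble = np.asarray(babble, dtype=float)[: len(target)]
    p_target = np.mean(target ** 2)
    p_babble = np.mean(babble ** 2)
    scale = np.sqrt(p_target / (p_babble * 10.0 ** (snr_db / 10.0)))
    return target + scale * babble
```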
Citations: 5
Expanding the sense of touch outside the body
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225172
Christopher C. Berger, Mar González-Franco
Under normal circumstances, our sense of touch is limited to our body. Recent evidence suggests, however, that our perception of touch can also be expanded to objects we are holding when certain tactile illusions are elicited by delivering vibrotactile stimuli in a particular manner. Here, we examined whether an extra-corporeal illusory sense of touch could be elicited using vibrotactile stimuli delivered via two independent handheld controllers while in virtual reality. Our results suggest that under the right conditions, one's sense of touch in space can be extended outside the body, and even into the empty space that surrounds us. Specifically, we show, in virtual reality, that one's sense of touch can be extended to a virtual stick one is holding, and also into the empty space between one's hands. These findings provide a means with which to expand the sense of touch beyond the hands in VR systems using two independent controllers, and also have important implications for our understanding of the human representation of touch.
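The stimulation scheme is not detailed above; a classic way to place a phantom vibration between two actuators is the funneling illusion, in which the relative amplitudes of the two vibrations shift the perceived location. The sketch below assumes simple linear amplitude panning, which may differ from the study's parameters.

```python
import numpy as np

def funneling_amplitudes(phantom_pos, base_amplitude=1.0):
    """Vibration amplitudes for the left and right hand-held controllers so that
    a phantom touch is felt at phantom_pos in [0, 1] (0 = left hand, 1 = right).
    Linear panning is an assumed, simplified model of the funneling illusion."""
    phantom_pos = float(np.clip(phantom_pos, 0.0, 1.0))
    return base_amplitude * (1.0 - phantom_pos), base_amplitude * phantom_pos
```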
Citations: 13
Comparison of unobtrusive visual guidance methods in an immersive dome environment
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3243888
S. Grogorick, Georgia Albuquerque, J. Tauscher, M. Magnor
In this paper, we evaluate various image-space modulation techniques that aim to unobtrusively guide viewers' attention. While previous evaluations mainly target desktop settings, we examine their applicability to ultra-wide-field-of-view immersive environments featuring the technical characteristics expected of future-generation head-mounted displays. A custom-built, high-resolution immersive dome environment with high-precision eye tracking is used in our experiments. We investigate the gaze-guidance success rate and the unobtrusiveness of five different techniques. Our results show promising guiding performance for four of the tested methods. With regard to unobtrusiveness, we find that, while no method remains completely unnoticed, many participants do not report any distractions. The evaluated methods also show promise for guiding users' attention in a broad range of virtual environment applications, e.g. virtually guided tours or field operation training.
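The five techniques are not enumerated above; a representative image-space modulation is a faint, temporally flickering luminance boost centred on the guidance target. The sketch below uses assumed values for footprint, amplitude, and flicker rate rather than the paper's settings.

```python
import numpy as np

def subtle_luminance_cue(image, target_xy, t, sigma_px=30.0, amplitude=0.05, freq_hz=8.0):
    """Add a faint, flickering Gaussian luminance boost around target_xy.

    image: float array (H, W, 3) with values in [0, 1]; target_xy: (x, y) pixel
    location of the guidance target; t: time in seconds. All modulation
    parameters are illustrative assumptions.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - target_xy[0]) ** 2 + (ys - target_xy[1]) ** 2
    footprint = np.exp(-dist2 / (2.0 * sigma_px ** 2))            # spatial Gaussian
    flicker = 0.5 * (1.0 + np.sin(2.0 * np.pi * freq_hz * t))     # temporal modulation
    return np.clip(image + (amplitude * flicker * footprint)[..., None], 0.0, 1.0)
```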
Citations: 8
The semantic space for emotional speech and the influence of different methods for prosody isolation on its perception
Pub Date : 2018-08-10 DOI: 10.1145/3225153.3225156
Martin Schorradt, Susana Castillo, D. Cunningham
Normally, when people talk to other people, they communicate not only using specific words, but also with intentional changes in their voice melody, facial expressions, and gestures. Not only is human communication inherently multimodal, it is also multi-layered. That is, it conveys not just simple semantic information, but also a wide variety of social, emotional, and functional (e.g., conversation control) information. Previous work has examined the perception of socio-emotional information conveyed by words and facial expressions. Here, we build on that work and examine the perception of socio-emotional information based solely on prosody (e.g., speech melody, rate, tempo, intensity). To examine the perception of affective prosody, it is necessary to remove all semantics from the speech signal - without changing the prosody! In this paper, we compare several different state-of-the-art methods for removing semantics. We started by recording an audio database containing a German sentence spoken by 11 people in 62 different emotional states. We then removed or masked the semantics using three different techniques. We also recorded the same 62 states for a pseudo-language phrase. Each of these five sets of stimuli was subjected to a semantic differential rating task to derive and compare the semantic spaces for emotions. The results show that each of the methods successfully removed the semantic component, but also changed the perception of the emotional content. Interestingly, the pseudo-word stimuli diverged most from the normal sentences. Furthermore, although each of the filters affected the perception of the sentence in some manner, they did so in different ways.
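The three masking techniques are not named above; low-pass filtering is one widely used way to make the words unintelligible while leaving intonation, rhythm, and intensity largely intact. A sketch using scipy, with the cutoff frequency as an assumed value:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_prosody_mask(speech, sample_rate, cutoff_hz=400.0, order=6):
    """Low-pass filter speech so lexical content is degraded while the prosodic
    contour survives. A ~400 Hz cutoff is a typical choice in content-masking
    studies; it is an assumption here, not necessarily the paper's parameter."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, np.asarray(speech, dtype=float))
```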
Citations: 2