
Latest publications from the Proceedings of the 2016 Symposium on Spatial User Interaction

Biometric Authentication Using the Motion of a Hand
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989210
Satoru Imura, H. Hosobe
We propose a hand gesture-based spatial interaction method for biometric authentication. It supports 3D gestures that allow users to move their hands without touching an input device. Using the motions of fingertips and joints as biometric data, the method improves the accuracy of authentication. We present the results of experiments in which subjects performed three types of gestures.
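The abstract does not specify how candidate gestures are matched against enrolled ones. As a hedged sketch only, one simple approach is to resample a fingertip trajectory to a fixed length and threshold its mean pointwise distance from an enrolled template; the function names, resampling length, and threshold below are all illustrative assumptions, not the authors' method:

```python
import math

def resample(traj, n=32):
    """Linearly resample a trajectory (list of 2D/3D points) to n points
    so gestures of different durations become comparable."""
    if len(traj) == 1:
        return [traj[0]] * n
    out = []
    for i in range(n):
        t = i * (len(traj) - 1) / (n - 1)   # fractional index into traj
        j = min(int(t), len(traj) - 2)
        f = t - j
        out.append(tuple(a + (b - a) * f for a, b in zip(traj[j], traj[j + 1])))
    return out

def trajectory_distance(a, b):
    """Mean pointwise Euclidean distance between two resampled trajectories."""
    ra, rb = resample(a), resample(b)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / len(ra)

def authenticate(candidate, template, threshold=0.1):
    """Accept if the candidate fingertip path stays close to the enrolled one."""
    return trajectory_distance(candidate, template) < threshold
```

In practice one template per finger and joint would be stored, and the threshold tuned on enrollment data; an elastic measure such as dynamic time warping would tolerate speed variation better than this fixed resampling.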
Citations: 3
Large Scale Interactive AR Display Based on a Projector-Camera System
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989183
Chun Xie, Y. Kameda, Kenji Suzuki, I. Kitahara
The school gymnasium, which plays an important role in both the physical and mental development of children, is a necessary facility for most schools. In recent years, considering individual differences among students in gender, age, developmental level, and interests, many new forms of gymnasium activity have been developed to make physical education more flexible. In some cases, introducing a new physical activity requires drawing new content on the gymnasium floor. Ordinarily, this is done with line-tape. However, content created with line-tape needs periodic maintenance, which is costly and time-consuming. Moreover, overlapping lines for different purposes can confuse users. Most critically, line-tape can represent only simple, static content, which greatly limits the variety of new physical education activities. This paper proposes a projection-based AR system consisting of multiple projectors and cameras to address these problems. The system aims to extend the traditional school gymnasium by providing not only dynamic AR content but also an interactive display on the gymnasium floor.
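The paper does not describe its calibration pipeline, but in projector-camera systems a planar surface such as a gym floor typically relates camera pixels to projector pixels through a 3x3 homography estimated once during calibration. A minimal sketch of applying such a matrix, with illustrative values not taken from the paper:

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 planar homography (row-major nested
    lists). For a flat floor, a point detected in the camera image can be
    mapped this way into projector pixel coordinates so graphics land
    exactly where the user touched or stood."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]          # projective scale
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical calibration result: scale by 2 and translate by (10, 20).
H_cam_to_proj = [[2.0, 0.0, 10.0],
                 [0.0, 2.0, 20.0],
                 [0.0, 0.0, 1.0]]
```

In a real system the matrix would be estimated from point correspondences (e.g. projected calibration patterns observed by each camera), with one homography per projector-camera pair.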
Citations: 7
Session details: Interaction I
Pub Date : 2016-10-15 DOI: 10.1145/3248572
Barrett Ens
Citations: 0
Sharpen Your Carving Skills in Mixed Reality Space
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989188
Maho Kawagoe, M. Otsuki, F. Shibata, Asako Kimura
This paper proposes a virtual carving system using ToolDevice in a mixed reality (MR) space. By touching and moving the device over real objects, users can carve them virtually. Real-world wood carving requires several steps, such as carving a rough outline, shaping the wood, and carving patterns on its surface. In this paper, we focus on the step of carving patterns on a surface and implement it in our MR carving system.
Citations: 0
KnowWhat: Mid Field Sensemaking for the Visually Impaired
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989190
Sujeath Pareddy, A. Agarwal, Manohar Swaminathan
KnowWhat is our solution for speeding up mid-field sensemaking by visually impaired persons (VIPs). Our prototype combines a spectacle-mounted camera, passive fiducial-marker tagging of the environment, and 3D spatial audio to build a novel interaction technique. We present qualitative experimental results evaluating our solution.
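The abstract does not detail the audio rendering, but any spatial-audio presentation of a tagged object needs at least its bearing relative to the listener's facing direction. A hedged sketch of that geometry, with all names and the 2D (x, z) ground-plane simplification being assumptions of this example:

```python
import math

def azimuth_deg(head_pos, head_yaw_deg, marker_pos):
    """Bearing of a fiducial-tagged object relative to where the listener
    is facing, in degrees in (-180, 180]. Positions are 2D (x, z) points
    on the ground plane; yaw 0 faces +z. This is the quantity a spatial
    audio renderer would use to pan the object's sound cue."""
    dx = marker_pos[0] - head_pos[0]
    dz = marker_pos[1] - head_pos[1]
    bearing = math.degrees(math.atan2(dx, dz))   # world-frame bearing
    return (bearing - head_yaw_deg + 180.0) % 360.0 - 180.0
```

A full renderer would also use distance (for loudness) and elevation, and would convolve the cue with head-related transfer functions rather than simple panning.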
Citations: 3
3D Camera Pose History Visualization
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2989185
Mayra Donaji Barrera Machuca, W. Stuerzlinger
We present a 3D camera pose history visualization that can assist users of CAD software, virtual worlds, and scientific visualizations in revisiting their navigation history. The contribution of this system is to enable users to move more efficiently through the virtual environment so they can focus on their main activity.
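The underlying data structure such a visualization needs is a timestamped log of camera poses that supports jumping back to a previous viewpoint. A minimal sketch under that assumption (the class and field names are illustrative, not from the paper):

```python
from bisect import bisect_left
from collections import namedtuple

# Orientation stored as a quaternion (w, x, y, z); position as (x, y, z).
Pose = namedtuple("Pose", "t position orientation")

class PoseHistory:
    """Append-only log of camera poses recorded during navigation.
    nearest() returns the pose closest to a requested time, the basic
    operation behind revisiting a point in one's navigation history."""

    def __init__(self):
        self._poses = []          # kept sorted because time only increases

    def record(self, t, position, orientation):
        self._poses.append(Pose(t, position, orientation))

    def nearest(self, t):
        ts = [p.t for p in self._poses]
        i = bisect_left(ts, t)
        # Compare the neighbours on either side of the insertion point.
        candidates = self._poses[max(0, i - 1):i + 1]
        return min(candidates, key=lambda p: abs(p.t - t))
```

A real implementation would subsample the log (e.g. only record poses that differ enough from the last one) and render the stored poses as camera glyphs along a path in the scene.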
Citations: 1
Optimising Free Hand Selection in Large Displays by Adapting to User's Physical Movements
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2985754
Xiaolong Lou, A. X. Li, Ren Peng, Preben Hansen
Advances in motion-sensing technologies such as Microsoft Kinect and ASUS Xtion have enabled users to select targets on a large display through natural hand gestures. In such interaction, users move left and right to navigate the display, and they frequently adjust their proximity to the display to switch between overview and focus views. These physical movements benefit information navigation, interaction-modality switching, and user-interface adaptation. In the more specific context of free-hand selection on large displays, however, the effect of physical movements has been less systematically investigated. To explore their potential, we developed and evaluated a selection technique that adapts to the user's physical movements. The results show that the new technique significantly improves both selection efficiency and accuracy, with the accuracy gains growing as the selection task becomes more difficult. Additionally, participants preferred the new technique to the pointer acceleration (PA) baseline.
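The paper's exact adaptation rule is not given in the abstract. One plausible sketch, assuming pointer gain simply scales with the user's distance from the display (coarse, fast movement from afar for overview; fine control up close for focus), with every name and constant below being an assumption of this example:

```python
def adapted_gain(user_distance_m, base_gain=1.0, ref_distance_m=1.5):
    """Hypothetical distance-adapted control-display gain: standing
    farther back than the reference distance amplifies hand motion,
    stepping closer attenuates it for precise selection."""
    return base_gain * (user_distance_m / ref_distance_m)

def move_cursor(cursor, hand_delta, user_distance_m):
    """Apply one frame of hand motion to the on-screen cursor using the
    distance-adapted gain."""
    g = adapted_gain(user_distance_m)
    return (cursor[0] + hand_delta[0] * g,
            cursor[1] + hand_delta[1] * g)
```

The PA baseline mentioned in the abstract would instead scale gain with hand *velocity*; the sketch differs only in what drives the gain term.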
Citations: 4
The Reality of Mixed Reality
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2983311
S. Izadi
Since Ivan Sutherland's Sword of Damocles, researchers have been pushing to make augmented, virtual, and mixed reality a reality. In recent years, these technologies have exploded onto the grand stage, with many devices on the consumer market and no apparent slowdown in demand. However, while excitement and thirst for mixed-reality technologies are high, many challenges remain in making such technologies a reality for everyday consumers. In this talk, I will outline some of these challenges -- some technical, some experiential, almost all social -- and discuss how one of the key factors in taking mixed reality to the next level is enhancing the way humans can ultimately interact and communicate. As part of this, I will outline why real-time 3D capture, reconstruction, and understanding of humans and the world around us is the key technology enabler in making this form of mixed reality truly ubiquitous.
Citations: 5
Improving Interaction in HMD-Based Vehicle Simulators through Real Time Object Reconstruction
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2985761
Michael Bottone, K. Johnsen
Bringing real objects into the virtual world has been shown to increase usability and presence in virtual reality applications. This paper presents a system that generates a real-time virtual reconstruction of real-world user-interface elements for use in a head-mounted-display-based driving simulator. Our system uses sensor fusion algorithms to combine data from depth and color cameras to generate an accurate, detailed, and fast rendering of the user's hands while they use the simulator. We tested our system and show that including the participants' real hands, the wheel, and the shifter in the virtual environment increases the immersion, presence, and usability of the simulation. Our system can also be used to bring other real objects into the virtual world, especially when accuracy, detail, and real-time updates are desired.
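The fusion pipeline is not detailed in the abstract, but a common first step when combining depth and color cameras is registering depth pixels into the color frame: back-project each depth pixel to a 3D point with the pinhole model, then project it into the other camera. A sketch under that assumption (the cameras are treated as co-located here, and the intrinsics are illustrative, not the paper's):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Depth pixel (u, v) with metric depth -> 3D point in the depth
    camera frame, using the pinhole model with focal lengths (fx, fy)
    and principal point (cx, cy)."""
    return ((u - cx) * depth / fx,
            (v - cy) * depth / fy,
            depth)

def project(p, fx, fy, cx, cy):
    """3D point -> pixel in the colour camera. A real system would first
    transform p by the depth-to-colour extrinsic pose; the cameras are
    assumed co-located in this sketch."""
    x, y, z = p
    return (fx * x / z + cx, fy * y / z + cy)
```

Once each depth pixel carries a color sample, the hand region can be segmented (e.g. by depth thresholding around the expected hand volume) and meshed for rendering inside the HMD.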
Citations: 4
On Your Feet!: Enhancing Vection in Leaning-Based Interfaces through Multisensory Stimuli
Pub Date : 2016-10-15 DOI: 10.1145/2983310.2985759
E. Kruijff, Alexander Marquardt, Christina Trepkowski, R. Lindeman, André Hinkenjann, Jens Maiero, B. Riecke
When navigating larger virtual environments and computer games, natural walking is often infeasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces ("human joystick") can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating bobbing head motions from walking), and vibrotactile cues (via vibrotactile transducers and bass-shakers under participants' feet) could all enhance participants' sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants' self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure showed no benefits and reduced usability, though.
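The cue-scheduling logic is not specified in the abstract; a hedged sketch of one plausible scheme maps lean magnitude to a simulated walking speed and fires a footstep cue (sound plus vibrotactile pulse) every stride length of simulated travel. All constants and names here are assumptions of this example, not the authors' parameters:

```python
def step_times(lean_magnitude, duration_s, max_speed=1.4, stride_m=0.7):
    """Timestamps (seconds) at which footstep cues should fire during a
    constant lean. lean_magnitude in [0, 1] maps linearly to walking
    speed up to max_speed (m/s); a step fires every stride_m of
    simulated travel, so harder leaning yields a faster cadence."""
    speed = max(0.0, min(1.0, lean_magnitude)) * max_speed
    if speed == 0.0:
        return []
    period = stride_m / speed            # seconds between steps
    times, t = [], period
    while t <= duration_s:
        times.append(round(t, 6))
        t += period
    return times
```

In a running system the same schedule would alternate left/right cues and also drive the visual head-bob phase, keeping the three modalities synchronized.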
Citations: 79