
Latest publications from the 2016 IEEE Symposium on 3D User Interfaces (3DUI)

Discriminative hand localization in depth images
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460059
Max Ehrlich, Philippos Mordohai
We present a novel hand localization technique for 3D user interfaces. Our method is designed to overcome the difficulty of fitting anatomical models, which fail to converge, or converge with large errors, in complex scenes or suboptimal imagery. We learn a discriminative model of the hand from depth images using fast-to-compute features and a Random Forest classifier. The learned model is then combined with a spatial clustering algorithm to localize the hand position. We propose three formulations of low-level image features for use in model training. We evaluate the performance of our method on low-resolution depth maps of users in natural poses, two to three meters from the sensor. Our method can detect an arbitrary number of hands per scene, and preliminary results show that it is robust to suboptimal imagery.
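The abstract describes a two-stage pipeline: per-pixel classification with a Random Forest over cheap depth features, then spatial clustering of the positive pixels. Below is a minimal sketch of that pipeline, assuming depth-difference features, scikit-learn's RandomForestClassifier and DBSCAN, and illustrative thresholds; the authors' three exact feature formulations are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import DBSCAN

def depth_difference_features(depth, offsets):
    """For each pixel, compare its depth with depths at fixed pixel
    offsets -- a common family of fast-to-compute depth features."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.zeros((h * w, len(offsets)), dtype=np.float32)
    for k, (dy, dx) in enumerate(offsets):
        ny = np.clip(ys + dy, 0, h - 1)
        nx = np.clip(xs + dx, 0, w - 1)
        feats[:, k] = (depth[ny, nx] - depth).ravel()
    return feats

offsets = [(-8, 0), (8, 0), (0, -8), (0, 8), (-16, 16), (16, -16)]
forest = RandomForestClassifier(n_estimators=50, max_depth=12)

# Placeholder fit so the sketch runs end to end; real training would use
# per-pixel hand/background labels from annotated depth maps.
rng = np.random.default_rng(0)
forest.fit(rng.normal(size=(200, len(offsets))), rng.integers(0, 2, 200))

def localize_hands(depth):
    """Per-pixel classification, then spatial clustering: each cluster
    centroid is one detected hand, so any number of hands is supported."""
    probs = forest.predict_proba(depth_difference_features(depth, offsets))[:, 1]
    pts = np.column_stack(np.nonzero(probs.reshape(depth.shape) > 0.5))
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=5, min_samples=30).fit_predict(pts)
    return [pts[labels == c].mean(axis=0) for c in set(labels) if c != -1]
```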
Citations: 0
Eye tracking for locomotion prediction in redirected walking
Pub Date: 2016-03-19 | DOI: 10.3929/ETHZ-A-010613910
Markus Zank, A. Kunz
Model predictive control has been shown to be a powerful tool for Redirected Walking when used to plan and select future redirection techniques. However, to use it effectively, a good prediction of the user's future actions is crucial. Traditionally, this prediction is made based on the user's position or current direction of movement. In the cognitive sciences, however, it has been shown that a person's gaze can also be highly indicative of their intention in both selection and navigation tasks. In this paper, this effect is used for the first time to predict a user's locomotion target during goal-directed locomotion in an immersive virtual environment. After discussing the general implications and challenges of using eye tracking for prediction in a locomotion context, we propose a prediction method for a user's intended locomotion target. This approach is then compared with position-based approaches in terms of prediction time and accuracy, based on data gathered in an experiment. The results show that, in certain situations, eye tracking allows an earlier prediction than the approaches currently used for redirected walking. However, other recently published prediction methods based on the user's position perform almost as well as the eye-tracking-based approaches presented in this paper.
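As a rough illustration of the idea, the sketch below scores each candidate locomotion target by how well recent gaze directions align with the user-to-target direction, alongside a position-based baseline that uses the current walking direction. The mean-cosine scoring scheme is an assumption for illustration, not the authors' published predictor.

```python
import numpy as np

def predict_target_gaze(user_pos, gaze_dirs, targets):
    """user_pos: (3,); gaze_dirs: (N, 3) recent unit gaze vectors;
    targets: (M, 3) candidate locomotion targets. Returns best index."""
    to_targets = targets - user_pos                        # (M, 3)
    to_targets /= np.linalg.norm(to_targets, axis=1, keepdims=True)
    scores = (gaze_dirs @ to_targets.T).mean(axis=0)       # mean cosine per target
    return int(np.argmax(scores))

def predict_target_position(user_pos, walk_dir, targets):
    """Position-based baseline: pick the target whose direction best
    matches the current (unit) walking direction."""
    to_targets = targets - user_pos
    to_targets /= np.linalg.norm(to_targets, axis=1, keepdims=True)
    return int(np.argmax(to_targets @ walk_dir))
```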
Citations: 31
Guidance field: Potential field to guide users to target locations in virtual environments
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460029
R. Tanaka, Takuji Narumi, T. Tanikawa, M. Hirose
It is known that interacting with virtual objects enhances users' understanding and interest more effectively than learning through passive media. However, compared with passive media, the excessive amount of information and interactive options in most virtual reality settings may cause users to quit exploring before they have experienced the entire content of the virtual world. In this paper, we propose a new guidance method that implicitly leads users to pre-defined locations in the virtual environment while still permitting free exploration, by using a kind of potential field (a guidance field). The guidance field is composed of two independent mechanisms: locomotion guidance and rotation guidance. We implemented our method in a virtual museum exploration system and exhibited it in a real museum to evaluate its effectiveness when used by a large number of people. The results suggest that our method successfully guides users to pre-defined locations and makes users aware of pre-defined objects. Moreover, the results suggest that our guidance may not interfere with users' free exploration.
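As a rough sketch of how such a guidance field could be realized, the code below adds a small attractive pull toward a pre-defined target to the user's locomotion (locomotion guidance) and nudges the view yaw toward the target bearing (rotation guidance), leaving user input dominant. The field shape and gain constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def guidance_velocity(user_pos, target_pos, user_vel, gain=0.3):
    """Locomotion guidance: add a gentle attraction toward the target."""
    d = target_pos - user_pos
    dist = np.linalg.norm(d)
    if dist < 1e-6:
        return user_vel
    return user_vel + gain * d / dist   # user input still dominates

def guidance_rotation(user_yaw, user_pos, target_pos, gain=0.05):
    """Rotation guidance: nudge the view yaw toward the target bearing."""
    dx, dz = (target_pos - user_pos)[[0, 2]]
    bearing = np.arctan2(dx, dz)
    err = (bearing - user_yaw + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    return user_yaw + gain * err
```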
Citations: 9
Gaitzilla: A game to study the effects of virtual embodiment in gait rehabilitation
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460072
Sharif Shahnewaz, Imtiaz Afarat, Tanvir Irfan, G. Samaraweera, Mikael Dallaire-Cote, David R. Labbé, J. Quarles
The long-term objective of this research is to improve gait (i.e., walking pattern) rehabilitation through the use of 3D user interfaces and virtual embodiment. Previous research has shown that virtual embodiment can elicit behavioral change and increase motivation for exercise. However, there has been minimal research on how virtual embodiment can affect persons undergoing physical rehabilitation. To enable future study of this, we present Gaitzilla, a novel gait rehabilitation game in which the user embodies a gigantic monster being attacked by small tanks on the ground. The user must step on the tanks to survive. The required in-game movements are inspired by real gait training exercises that focus on foot placement and control. We utilize 3D user interfaces for control of the user's avatar. This poster presents the concept and implementation of the game, the methodology behind its design, and future considerations for studying the effects of virtual embodiment on gait rehabilitation.
Citations: 4
A hybrid projection to widen the vertical field of view with large screens to improve the perception of personal space in architectural project review
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460052
Sabah Boustila, Antonio Capobianco, D. Bechmann, Olivier Génevaux
In this paper, we suggest using a hybrid projection to increase the vertical geometric field of view without incurring large deformations, in order to preserve distance perception and to allow users to see the surrounding ground. We conducted an experiment in furnished and unfurnished houses to evaluate the perception of distances and spatial comprehension. Results show that the hybrid projection improves the perception of the surrounding ground, which leads to an improvement in spatial comprehension. Moreover, it preserves the perception of distances and sizes, providing performance similar to that of a standard perspective projection in the distance estimation task.
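For context, a simplified way to widen the vertical geometric field of view is an asymmetric (off-axis) frustum whose bottom clip plane is extended downward so more of the surrounding ground enters the view; the paper's hybrid projection additionally limits the resulting deformation, which this sketch does not attempt. The angles below are illustrative assumptions.

```python
import numpy as np

def asymmetric_frustum(l, r, b, t, n, f):
    """Standard OpenGL-style off-axis perspective projection matrix."""
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),   0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),   0.0],
        [0.0,       0.0,       -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,           0.0]])

# Symmetric 60-degree horizontal FOV, but an asymmetric vertical FOV
# (20 degrees up, 45 degrees down) so the near-ground region is visible.
n, f = 0.1, 100.0
half_w = n * np.tan(np.radians(30))
top    = n * np.tan(np.radians(20))
bottom = -n * np.tan(np.radians(45))
P = asymmetric_frustum(-half_w, half_w, bottom, top, n, f)
```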
Citations: 2
Considerations on binocular mismatching in observation-based diminished reality
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460070
Hitomi Matsuki, Shohei Mori, Sei Ikeda, F. Shibata, Asako Kimura, H. Tamura
In this paper, we introduce novel problems of binocular stereo (binocular mismatching) in observation-based diminished reality. To confirm these problems, we simulate an observation-based diminished reality system using a video see-through head-mounted display. We also demonstrate that simple methods can reduce such binocular mismatching.
Citations: 2
Motive compass: Navigation interface for locomotion in virtual environments constructed with spherical images
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460031
R. Tanaka, Takuji Narumi, T. Tanikawa, M. Hirose
In this paper, we propose Motive Compass, a new navigation interface for locomotion in virtual environments constructed from spherical images. In these virtual environments, users can gaze in all directions, but can only move along the paths the camera traversed during recording. Because the rotation of the virtual camera is not constrained to the navigation paths, users may perceive that accessible locations are unconstrained and that they can move freely. It is therefore necessary to inform users of the accessible directions. Furthermore, velocity control is needed for close exploration of virtual environments. We therefore propose the Motive Compass input interface, which intuitively shows users the accessible directions and enables them to control their velocity. We conducted a large-scale demonstration experiment in a real exhibition to evaluate our interface and compared it with a conventional interface. The results show that our interface more effectively presents the accessible directions and enables users to control their velocity, which supports exploration in virtual environments.
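A minimal sketch of path-constrained locomotion in this spirit: rotation stays free, while translation is projected onto the nearest recorded navigation edge, yielding both the set of accessible directions and a controllable speed along them. The navigation-graph representation and snapping threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def accessible_directions(pos, edges, snap_radius=0.5):
    """edges: list of (a, b) endpoint pairs of recorded camera paths.
    Returns the unit directions the user can move in from `pos`."""
    dirs = []
    for a, b in edges:
        ab = b - a
        t = np.clip(np.dot(pos - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        if np.linalg.norm(a + t * ab - pos) < snap_radius:  # on/near this edge
            u = ab / np.linalg.norm(ab)
            dirs += [u, -u]
    return dirs

def constrained_step(pos, desired_vel, edges, dt):
    """Move along whichever accessible direction best matches the input;
    the projected magnitude gives the velocity control."""
    dirs = accessible_directions(pos, edges)
    if not dirs:
        return pos
    best = max(dirs, key=lambda u: np.dot(desired_vel, u))
    speed = max(0.0, float(np.dot(desired_vel, best)))
    return pos + best * speed * dt
```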
Citations: 2
Evaluation of user-centric optical see-through head-mounted display calibration using a leap motion controller
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460047
Kenneth R. Moser, J. Edward Swan
Advances in optical see-through head-mounted display technology have yielded a number of consumer-accessible options, such as the Google Glass and Epson Moverio BT-200, and have paved the way for promising next-generation hardware, including the Microsoft HoloLens and Epson Pro BT-2000. The release of consumer devices, though, has also been accompanied by an ever-increasing need for standardized optical see-through display calibration procedures easily implemented and performed by researchers, developers, and novice users alike. Automatic calibration techniques offer the possibility of ubiquitous, environment-independent solutions that do not rely on user interaction. These processes, however, require additional eye tracking hardware and algorithms not natively present in current display offerings. User-dependent approaches therefore remain the only viable option for effective calibration of current-generation optical see-through hardware. The inclusion of depth sensors and hand tracking cameras, promised in forthcoming consumer models, offers further potential to improve these manual methods and to provide practical, intuitive calibration options accessible to a wide user base. In this work, we evaluate the accuracy and precision of manual optical see-through head-mounted display calibration performed using a Leap Motion controller. Both hand- and stylus-based methods for monocular and stereo procedures are examined, along with several on-screen reticle designs for improving alignment context during calibration. Our study shows that, while enhancing the context of reticles for hand-based alignments does yield improved results, Leap Motion calibrations performed with a stylus offer the most accurate and consistent performance, comparable to that found in previous studies of environment-centric routines. In addition, we found that stereo calibration further improved precision in every case. We believe that our findings not only validate the potential of hand- and gesture-based trackers in facilitating optical see-through calibration methodologies, but also provide a suitable benchmark to help guide future efforts in standardizing calibration practices for user-friendly consumer systems.
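For background on what such a manual calibration computes: in SPAAM-style procedures, the user repeatedly aligns an on-screen reticle with a tracked 3D point (here it could be a Leap-tracked stylus tip), and a 3x4 projection matrix is solved from the 2D-3D correspondences by direct linear transformation. The sketch below shows that core DLT step under simplified assumptions (no coordinate normalization); it is not the paper's exact procedure.

```python
import numpy as np

def solve_projection_dlt(pts3d, pts2d):
    """pts3d: (N, 3) tracked stylus-tip positions; pts2d: (N, 2) reticle
    pixels the user aligned them with; N >= 6. Returns a 3x4 matrix G
    minimizing the algebraic reprojection error."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        P = [X, Y, Z, 1.0]
        rows.append(P + [0.0] * 4 + [-u * p for p in P])
        rows.append([0.0] * 4 + P + [-v * p for p in P])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(G, X):
    """Apply the calibrated projection to a 3D point."""
    x = G @ np.append(X, 1.0)
    return x[:2] / x[2]
```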
Citations: 19
Navigation in virtual environments: Design and comparison of two anklet vibration patterns for guidance
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460071
Jérémy Plouzeau, Aida Erfanian, Cynthia Chiu, F. Mérienne, Yaoping Hu
In this study, we present a preliminary exploration of the added value of vibration information for guiding navigation in a virtual environment (VE). The exploration consists of two parts. First, we designed two different vibration patterns. These patterns, a pushing pattern and a compass pattern, differ conceptually in their level of abstraction. Second, we conducted an experiment comparing the two patterns for guiding navigation in a VE. The objective of the comparison is to establish a baseline for examining the suitability of using vibration patterns to guide navigation.
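A minimal sketch of how two such patterns might map a desired heading onto actuators arranged around the ankle, assuming eight evenly spaced vibrators: a compass-style pattern vibrates the actuator pointing toward the target, while a pushing-style pattern vibrates the opposite side, as if pushing the user toward it. Actuator count, pattern semantics, and intensity mapping are illustrative assumptions, not the authors' design.

```python
import numpy as np

N_ACTUATORS = 8  # assumed: evenly spaced vibrators around the ankle
angles = np.arange(N_ACTUATORS) * 2 * np.pi / N_ACTUATORS

def compass_pattern(target_bearing):
    """Vibrate the single actuator pointing toward the target direction."""
    diff = np.angle(np.exp(1j * (angles - target_bearing)))  # wrapped differences
    out = np.zeros(N_ACTUATORS)
    out[np.argmin(np.abs(diff))] = 1.0
    return out

def pushing_pattern(target_bearing):
    """Vibrate the side opposite the target, as if pushing the user toward it."""
    return compass_pattern(target_bearing + np.pi)
```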
Citations: 1
Interpreting 2D gesture annotations in 3D augmented reality
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460046
B. Nuernberger, Kuo-Chin Lien, Tobias Höllerer, M. Turk
A 2D gesture annotation provides a simple way to annotate the physical world in augmented reality for a range of applications such as remote collaboration. When rendered from novel viewpoints, these annotations have previously only worked with statically positioned cameras or planar scenes. However, if the camera moves and is observing an arbitrary environment, 2D gesture annotations can easily lose their meaning when shown from novel viewpoints due to perspective effects. In this paper, we present a new approach towards solving this problem by using gesture-enhanced annotation interpretation. By first classifying which type of gesture the user drew, we show that it is possible to render the 2D annotations in 3D in a way that conforms more closely to the original intention of the user than traditional methods do. We first determined a generic vocabulary of important 2D gestures for an augmented reality enhanced remote collaboration scenario by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel real-time method to automatically handle the two most common 2D gesture annotations - arrows and circles - and give a detailed analysis of the ambiguities that must be handled in each case. Arrow gestures are interpreted by identifying their anchor points and using scene surface normals for better perspective rendering. For circle gestures, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our method outperforms previous approaches by better conveying the meaning of the original drawing from different viewpoints.
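A minimal sketch of the arrow case described above, under assumed interfaces: take the stroke's tip as its anchor, raycast the tip into the scene for a 3D anchor point and surface normal, and flatten the arrow direction into the local tangent plane so it renders sensibly from new viewpoints. The raycast helper is hypothetical, and the paper's circle energy function is not reproduced here.

```python
import numpy as np

def interpret_arrow(stroke, raycast):
    """stroke: (N, 2) 2D gesture points; raycast(pixel) -> (point3d, normal)
    against the scene geometry (hypothetical interface). Returns the 3D
    anchor, in-plane direction, and surface normal for rendering."""
    tip3d, normal = raycast(stroke[-1])   # assume the tip is the last point
    tail3d, _ = raycast(stroke[0])
    d = tip3d - tail3d
    d -= np.dot(d, normal) * normal       # flatten into the tangent plane
    direction = d / (np.linalg.norm(d) + 1e-9)
    return tip3d, direction, normal
```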
Citations: 36