
Latest publications from the 2016 IEEE Symposium on 3D User Interfaces (3DUI)

Designing capsule, an input device to support the manipulation of biological datasets
Pub Date : 2016-04-28 DOI: 10.1109/3DUI.2016.7460067
W. Lages, Gustavo A. Arango, D. Laidlaw, J. Socha, D. Bowman
In this paper we present the design process of Capsule, an inertial input device to support 3D manipulation of biological datasets. Our motivation is to improve the scientist's workflow during the analysis of 3D biological data such as proteins, CT scans or neuron fibers. We discuss the design process and possibilities for this device.
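The abstract does not describe how Capsule's inertial data drives the manipulation, so the following is only a minimal sketch of the general idea behind an inertial 3D-rotation input: gyroscope angular velocity is integrated into an orientation quaternion that could then be applied to the dataset. The function, sample rate, and values are hypothetical, not the authors' implementation.

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Integrate body-frame angular velocity (rad/s) into a unit quaternion.

    q     -- current orientation as (w, x, y, z)
    omega -- gyroscope reading, 3-vector in rad/s
    dt    -- sample period in seconds
    """
    w, x, y, z = q
    ox, oy, oz = omega
    # Quaternion derivative: dq/dt = 0.5 * q * (0, omega)
    dq = 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy - x * oz + z * ox,
         w * oz + x * oy - y * ox,
    ])
    q = np.asarray(q) + dq * dt
    return q / np.linalg.norm(q)   # renormalize to stay on the unit sphere

# Hypothetical usage: a slow roll about the device's x-axis, sampled at 100 Hz.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = integrate_gyro(q, omega=np.array([0.3, 0.0, 0.0]), dt=0.01)
print(q)  # orientation that would be applied to the biological dataset
```

A real device would additionally have to correct gyroscope drift, for example by fusing accelerometer or magnetometer data.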
Citations: 2
A part-task haptic simulator for ophthalmic surgical training
Pub Date : 2016-04-26 DOI: 10.1109/3DUI.2016.7460069
Jia Luo, Patrick Kania, P. Banerjee, Shammema Sikder, C. Luciano, W. Myers
This poster presents a part-task haptic simulator for ophthalmic surgical training developed for the MicrovisTouch simulation platform. This ophthalmic surgical simulator provides both realistic 3D visualization and haptic feedback. Trainees learn to interact with a physics-based dynamic virtual eye model and receive tactile feedback by handling virtual instruments while simulating a series of surgical tasks and exercises, including micro-dexterity and corneal incision. A pilot study of the micro-dexterity task was conducted to measure its effectiveness in evaluating trainees' skills in performing the precise instrument motions required for ophthalmic surgical procedures.
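The haptic rendering itself is not described in the abstract. As a generic illustration of how tactile feedback against a virtual surface is often produced in such simulators, here is a minimal penalty-based force sketch for a spherical eye proxy; it is not the MicrovisTouch implementation, and the function, stiffness, and values are hypothetical.

```python
import numpy as np

def penalty_force(tool_tip, eye_center, eye_radius, stiffness=600.0):
    """Return a simple spring force (N) pushing the tool tip out of a spherical eye model.

    Standard penalty-based approach used in many haptic loops:
    force is proportional to penetration depth along the surface normal.
    """
    offset = tool_tip - eye_center
    dist = np.linalg.norm(offset)
    penetration = eye_radius - dist
    if penetration <= 0.0:              # no contact -> no force
        return np.zeros(3)
    normal = offset / dist              # outward surface normal at contact
    return stiffness * penetration * normal

# Hypothetical usage at a 1 kHz haptic rate: tool tip 1 mm inside a 12 mm eye.
f = penalty_force(np.array([0.0, 0.0, 0.011]), np.zeros(3), eye_radius=0.012)
print(f)  # force (in newtons) that would be sent to the haptic device
```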
Citations: 6
DesktopGlove: A multi-finger force feedback interface separating degrees of freedom between hands
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460024
Merwan Achibet, Géry Casiez, M. Marchal
In virtual environments, interacting directly with our hands and fingers greatly contributes to immersion, especially when force feedback is provided to simulate the touch of virtual objects. Yet common haptic interfaces are unfit for multi-finger manipulation, and only costly and cumbersome grounded exoskeletons provide all the forces expected during object manipulation. To make multi-finger haptic interaction more accessible, we propose combining two affordable haptic interfaces into a bimanual setup named DesktopGlove. With this approach, each hand is in charge of different components of object manipulation: one commands the global motion of a virtual hand while the other controls its fingers for grasping. In addition, each hand is subjected to forces that relate to its own degrees of freedom, so users perceive a variety of haptic effects through both of them. Our results show that (1) users are able to integrate the separated degrees of freedom of DesktopGlove to efficiently control a virtual hand in a posing task, (2) DesktopGlove shows overall better performance than a traditional data glove and is preferred by users, and (3) users considered the separated haptic feedback realistic and accurate for manipulating objects in virtual environments.
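To illustrate the separation of degrees of freedom described above, the sketch below merges two hypothetical device streams into a single virtual-hand state: one stream provides the 6-DOF pose, the other the finger flexion. It is only an illustration of the idea, not the authors' implementation; all type and field names are made up.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualHand:
    """Virtual-hand state assembled from two physical devices (hypothetical layout)."""
    position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    orientation: List[float] = field(default_factory=lambda: [1.0, 0.0, 0.0, 0.0])  # quaternion (w, x, y, z)
    finger_flexion: List[float] = field(default_factory=lambda: [0.0] * 5)          # 0 = open, 1 = closed

def update_hand(hand: VirtualHand, pose_sample: dict, finger_sample: dict) -> VirtualHand:
    """Merge the two input streams: one device drives the global 6-DOF pose,
    the other drives the five finger flexion values."""
    hand.position = pose_sample["position"]
    hand.orientation = pose_sample["orientation"]
    hand.finger_flexion = finger_sample["flexion"]
    return hand

# Hypothetical usage with fake samples from the two devices.
hand = VirtualHand()
hand = update_hand(
    hand,
    pose_sample={"position": [0.1, 0.2, 0.3], "orientation": [1.0, 0.0, 0.0, 0.0]},
    finger_sample={"flexion": [0.8, 0.7, 0.6, 0.5, 0.4]},
)
print(hand)
```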
Citations: 10
Looking into HMD: A method of latency measurement for head mounted display
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460064
R. Kijima, Kento Miyajima
Latency is an important specification of a Head Mounted Display (HMD). For state-of-the-art HMDs, the dynamical characteristics of the display and of the lag compensation are a non-negligible part of the latency that remains after lag compensation. Examining past methods revealed that evaluating the subjective view is necessary to capture such values. The measurement results demonstrate the ability to evaluate these dynamical characteristics as well as the average latency.
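The paper's specific measurement procedure is not given in the abstract. As a general illustration of how end-to-end HMD latency can be estimated from two recorded signals (head motion and the motion observed in the displayed image, e.g. captured by a camera looking into the HMD), here is a cross-correlation sketch; it should not be taken as the authors' method, and the signals used below are synthetic.

```python
import numpy as np

def estimate_latency_ms(tracker_signal, display_signal, sample_rate_hz):
    """Estimate latency as the lag (ms) that maximizes the cross-correlation
    between a recorded head-motion signal and the motion observed in the
    displayed image.  Both signals must share the same sample rate."""
    a = tracker_signal - np.mean(tracker_signal)
    b = display_signal - np.mean(display_signal)
    corr = np.correlate(b, a, mode="full")          # lag of b relative to a
    lag_samples = np.argmax(corr) - (len(a) - 1)
    return 1000.0 * lag_samples / sample_rate_hz

# Synthetic example: a 2 Hz sinusoidal head motion displayed 40 ms late, sampled at 1 kHz.
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
head = np.sin(2 * np.pi * 2 * t)
shown = np.roll(head, 40)                           # 40-sample (40 ms) delay
print(estimate_latency_ms(head, shown, fs))         # ~40.0
```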
Citations: 5
Batmen - Hybrid collaborative object manipulation using mobile devices
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460077
M. Cabral, Gabriel Roque, Mario Nagamura, Andre Montes, Eduardo Zilles Borba, C. Kurashima, M. Zuffo
We present an interactive and collaborative 3D object manipulation system using off-the-shelf mobile devices coupled with Augmented Reality (AR) technology that allows multiple users to collaborate concurrently on a scene. Each user interested in participating in this collaboration uses both a mobile device running Android and a desktop (or laptop) working in tandem. The 3D scene is visualized by the user on the desktop system. Changes to the scene viewpoint and object manipulation are performed with the mobile device through object tracking. Multiple users can collaborate on object manipulation by each using a laptop and a mobile device. The system leverages users' knowledge of common gesture-based tasks performed on current mobile devices. We built a prototype system that allows users to complete the requested tasks and performed an informal user study with experienced VR researchers to validate the system.
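As an illustration of the kind of message a mobile client in such a hybrid setup might send to the desktop viewer, here is a minimal sketch of a transform update sent over UDP. The message layout, port, and field names are hypothetical, not the paper's protocol.

```python
import json
import socket

def send_transform(sock, server_addr, object_id, position, rotation_quat, user_id):
    """Send one object-transform update from a mobile client to the desktop viewer.
    The JSON message layout is purely illustrative."""
    msg = {
        "user": user_id,
        "object": object_id,
        "position": position,          # metres, scene coordinates
        "rotation": rotation_quat,     # quaternion (w, x, y, z)
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), server_addr)

# Hypothetical usage: user 1 moves object 7 on a locally running desktop viewer.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_transform(sock, ("127.0.0.1", 9000), object_id=7,
               position=[0.2, 1.1, -0.4], rotation_quat=[1.0, 0.0, 0.0, 0.0], user_id=1)
```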
Citations: 5
Let your fingers do the walking: A unified approach for efficient short-, medium-, and long-distance travel in VR
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460027
Zhixin Yan, R. Lindeman, Arindam Dey
The tradeoff between speed and precision is one of the challenging problems of travel interfaces. Sometimes users want to travel long distances (e.g., fly) and care less about precise movement, while other times they want to approach nearby objects in a more precise way (e.g., walk) and care less about how quickly they move. Between these two extremes there are scenarios in which speed and precision become equally important. In real life, we often seamlessly combine these modes. However, most VR systems support a single travel metaphor, which may only be good for one range of travel but not others. We present a new VR travel framework that supports three separate multi-touch travel techniques, one for each distance range, all using the same device. We use a unifying metaphor of the user's fingers becoming their legs for each of the techniques. We are investigating the usability and user acceptance of the fingers-as-legs metaphor, as well as the efficiency and naturalness of switching between the different travel modes. We conducted an experiment focusing on user performance using the three travel modes, and compared our multi-touch, gesture-based approach with a traditional Gamepad travel interface. The results suggest that participants using a Gamepad interface are more time-efficient. However, the quality of completing the tasks with the two input devices was similar, while ForcePad users were faster at switching between travel modes.
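A minimal sketch of the core idea of one device serving three distance ranges: pick a travel technique from the distance to the target region and scale the touch drag accordingly. The thresholds, gains, and range labels here are hypothetical, not values from the paper.

```python
def travel_mode(distance_m):
    """Pick a travel technique from the distance to the target region.
    The thresholds are hypothetical."""
    if distance_m < 5.0:
        return "short"     # precise, walking-like motion
    if distance_m < 50.0:
        return "medium"
    return "long"          # fast, flying-like motion

# Hypothetical speed gains (m/s per unit of normalized drag) for each range.
GAIN = {"short": 0.5, "medium": 3.0, "long": 20.0}

def velocity_from_drag(drag_xy, distance_m):
    """Map a normalized touch drag (-1..1 per axis) to a strafe/forward velocity."""
    mode = travel_mode(distance_m)
    g = GAIN[mode]
    return mode, (g * drag_xy[0], g * drag_xy[1])

# Hypothetical usage: a mostly-forward drag while the target is 120 m away.
print(velocity_from_drag((0.0, 0.8), distance_m=120.0))   # ('long', (0.0, 16.0))
```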
Citations: 24
Effects of user physical fitness on performance in virtual reality
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460057
Aryabrata Basu, Catherine Ball, Benjamin Manning, K. Johnsen
A person's level of physical fitness can affect their health and many other aspects of their lives. However, little is known about the effect of physical fitness on factors relevant to virtual environments. To address this knowledge gap, we performed a research study examining the relationship of several physical fitness measures with performance, presence, and simulator sickness during use of an HMD-based maze-type virtual environment. We recorded the trajectory of each participant through the maze. After the virtual environment session, participants reported simulator sickness and presence, and provided written and verbal feedback. Our analysis of the data shows a positive correlation between self-reported physical fitness and user performance. Further research is necessary to establish a causal relationship and to develop methods that make use of this new information in the design of virtual environments.
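The reported relationship is a simple bivariate correlation; the sketch below shows how such an analysis could be run on per-participant fitness scores and performance measures. The numbers are purely illustrative and are not the study's data.

```python
import numpy as np

# Hypothetical per-participant values: self-reported fitness score and a
# performance measure (scaled so that higher = better).  Illustrative only.
fitness = np.array([2.0, 3.5, 4.0, 1.5, 5.0, 3.0, 4.5, 2.5])
performance = np.array([0.40, 0.55, 0.70, 0.35, 0.80, 0.50, 0.65, 0.45])

# Pearson correlation coefficient between the two measures.
r = np.corrcoef(fitness, performance)[0, 1]
print(f"r = {r:.2f}")   # a positive r indicates better performance with higher fitness
```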
Citations: 8
Evaluation of hands-free HMD-based navigation techniques for immersive data analysis
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460040
Daniel Zielasko, Sven Horn, Sebastian Freitag, B. Weyers, T. Kuhlen
To use the full potential of immersive data analysis when wearing a head-mounted display, users have to be able to navigate through the spatial data. We collected, developed, and evaluated five different hands-free navigation methods that are usable while seated in the analyst's usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants, which showed that a body-leaning metaphor and an accelerometer-pedal metaphor performed best. In the given task, participants had to determine the shortest path between various pairs of vertices in a large 3D graph.
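A minimal sketch of what a body-leaning travel metaphor can look like for a seated user: horizontal torso offset from a calibrated rest pose, beyond a small dead zone, is mapped to a travel velocity. The function, dead zone, and gains are hypothetical and are not taken from the paper.

```python
import numpy as np

def lean_to_velocity(torso_pos, rest_pos, dead_zone_m=0.03, gain=8.0, max_speed=3.0):
    """Map a seated user's torso lean (horizontal offset from a calibrated rest
    position, in metres) to a horizontal travel velocity (m/s).  A small dead
    zone suppresses drift from posture noise; parameters are hypothetical."""
    offset = np.asarray(torso_pos[:2]) - np.asarray(rest_pos[:2])   # ignore the vertical axis
    magnitude = np.linalg.norm(offset)
    if magnitude < dead_zone_m:
        return np.zeros(2)
    direction = offset / magnitude
    speed = min(gain * (magnitude - dead_zone_m), max_speed)
    return speed * direction

# Hypothetical usage: user leans 10 cm forward from the calibrated rest pose.
print(lean_to_velocity(torso_pos=[0.0, 0.10, 1.2], rest_pos=[0.0, 0.0, 1.2]))
```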
Citations: 75
Towards a comparative evaluation of visually guided physical reach motions during 3D interactions in real and virtual environments
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460058
Elham Ebrahimi, Sabarish V. Babu, C. Pagano, S. Jörg
In an initial study, we characterize the properties of human reach motion with and without visual guidance in real and virtual worlds within interaction space. We aim to understand how perceptual differences between real and virtual worlds affect physical reaches during 3D interaction. Typically, participants spatially reach to the perceived location of objects in 3D to perform selection and manipulation activities. These physical reach motions include those of virtual assembly tasks or rehabilitation exercises, in which participants have only approximate perceptual information in the virtual world compared to the real-world situation, due to technological limitations such as a limited visual field of view and resolution as well as latency and jitter associated with physical movements. In this poster, we try to understand how the motor responses of participants differ between visually guided and non-visually guided situations. We compare and contrast the motor component of 3D interaction between the virtual and physical worlds by investigating factors such as the accuracy and velocity of each reaching task.
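As an illustration of the accuracy and velocity factors mentioned above, the sketch below computes endpoint error and peak speed from a sampled reach trajectory. The function, trajectory, and target are hypothetical, not the study's data or analysis code.

```python
import numpy as np

def reach_metrics(trajectory, timestamps, target):
    """Compute two common reach metrics from a sampled hand trajectory:
    endpoint accuracy (distance from final sample to target, in metres) and
    peak speed (m/s).  Trajectory is an (N, 3) array of positions."""
    trajectory = np.asarray(trajectory, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    endpoint_error = np.linalg.norm(trajectory[-1] - np.asarray(target))
    velocities = np.diff(trajectory, axis=0) / np.diff(timestamps)[:, None]
    peak_speed = np.linalg.norm(velocities, axis=1).max()
    return endpoint_error, peak_speed

# Hypothetical usage: a smooth reach towards a target 30 cm away, sampled at 90 Hz.
t = np.arange(0, 0.5, 1 / 90)
u = np.clip(t / 0.4, 0, 1)
traj = np.outer(u ** 2 * (3 - 2 * u), [0.3, 0.0, 0.0])   # smoothstep reach profile
print(reach_metrics(traj, t, target=[0.3, 0.0, 0.0]))
```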
Citations: 2
VUME: The voluntary-use methodology for evaluations
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460042
Jian Ma, Prathamesh Potnis, Alec G. Moore, Ryan P. McMahan
In an attempt to better understand how controlled research results impact actual voluntary use of 3D user interfaces (3D UIs), we developed a new evaluation approach. Using this approach, we conducted two studies evaluating two head-mounted displays (HMDs): a Sensics zSight and an Oculus Rift Development Kit 1 (DK1). The results of the first study indicate that the DK1 affords significantly better user performance. In the second study, we used a between-subjects design to determine whether participants would voluntarily explore and interact with a virtual environment more with the DK1 than with the zSight. We did not find a significant difference between the two HMDs, but statistically showed that the HMDs were equivalent. This indicates that results found in controlled evaluations do not always play a significant role in the voluntary use of a 3D UI.
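Statistical equivalence between two conditions is commonly shown with two one-sided tests (TOST); the sketch below illustrates that procedure on made-up data. It is offered only as an example of an equivalence test, not as the analysis the authors actually used, and the margin and numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin):
    """Two one-sided t-tests (TOST) for independent samples: the groups are
    treated as equivalent when the mean difference is shown to lie within
    (-margin, +margin).  Returns the larger of the two one-sided p-values."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    _, p_lower = stats.ttest_ind(a, b - margin, alternative="greater")  # diff > -margin
    _, p_upper = stats.ttest_ind(a, b + margin, alternative="less")     # diff < +margin
    return max(p_lower, p_upper)

# Hypothetical usage: per-participant exploration times (s) under two HMDs,
# with an equivalence margin of 10 s.  Numbers are illustrative only.
rng = np.random.default_rng(0)
zsight = rng.normal(120, 8, 12)
dk1 = rng.normal(121, 8, 12)
print(tost_equivalence(zsight, dk1, margin=10.0))   # typically a small p -> equivalent within +/-10 s
```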
Citations: 1