Effects of redirection on spatial orientation in real and virtual environments
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759214
Evan A. Suma, D. Krum, Samantha L. Finkelstein, M. Bolas
We report a user study that investigated the effect of redirection in an immersive virtual environment on spatial orientation relative to both real world and virtual stimuli. Participants performed a series of spatial pointing tasks with real and virtual targets, during which they experienced three within-subjects conditions: rotation-based redirection, change blindness redirection, and no redirection. Our results indicate that when using the rotation technique, participants spatially updated both their virtual and real world orientations during redirection, resulting in pointing accuracy to the targets' recomputed positions that was strikingly similar to the control condition. While our data also suggest that a similar spatial updating may have occurred when using a change blindness technique, the realignment of targets appeared to be more complicated than a simple rotation, and was thus difficult to measure quantitatively.
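The pointing analysis rests on simple rotation arithmetic: a rotation gain makes the virtual scene turn farther than the user's body, so a target's expected bearing shifts by the injected rotation. A minimal sketch in Python, with a hypothetical gain value and sign convention (the abstract specifies neither):

```python
import math

def recomputed_bearing(bearing_before, real_turn, gain):
    """Bearing at which a pre-redirection target should be pointed,
    assuming the participant spatially updates with the injected rotation."""
    injected = real_turn * (gain - 1.0)  # rotation added beyond the real turn
    return (bearing_before - injected) % (2 * math.pi)

# Hypothetical example: a 90-degree real turn under a gain of 1.3 injects
# 27 degrees, shifting a target's bearing from 45 to 18 degrees.
print(math.degrees(recomputed_bearing(math.radians(45), math.radians(90), 1.3)))
```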
{"title":"Effects of redirection on spatial orientation in real and virtual environments","authors":"Evan A. Suma, D. Krum, Samantha L. Finkelstein, M. Bolas","doi":"10.1109/3DUI.2011.5759214","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759214","url":null,"abstract":"We report a user study that investigated the effect of redirection in an immersive virtual environment on spatial orientation relative to both real world and virtual stimuli. Participants performed a series of spatial pointing tasks with real and virtual targets, during which they experienced three within-subjects conditions: rotation-based redirection, change blindness redirection, and no redirection. Our results indicate that when using the rotation technique, participants spatially updated both their virtual and real world orientations during redirection, resulting in pointing accuracy to the targets' recomputed positions that was strikingly similar to the control condition. While our data also suggest that a similar spatial updating may have occurred when using a change blindness technique, the realignment of targets appeared to be more complicated than a simple rotation, and was thus difficult to measure quantitatively.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117324014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-touch 3D navigation for a building energy management system
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759231
M. Näf, E. Ferranti
This poster presents a multi-touch navigation interface for a building energy management system with a three-dimensional data model. It extends the well-established “rubber-band” 2D interaction gestures to a 3D world-in-hand paradigm, using a navigation widget to select the active manipulation axis. A nested, semi-transparent display of the data hierarchy requires careful selection of the manipulation pivot. A hit-testing scheme is introduced to select the most likely object within the hierarchy.
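A sketch of what such a hit-testing scheme could look like; the Hit structure and scoring weights below are illustrative assumptions, not the authors' algorithm. A pick ray collects a candidate at every level of the nested hierarchy it passes through, and the "most likely" object is chosen by trading hierarchy depth against hit distance:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    node_depth: int      # depth in the data hierarchy (building=0, floor=1, ...)
    ray_distance: float  # distance from the ray origin to the intersection

def most_likely_hit(hits, depth_weight=1.0, distance_weight=0.1):
    # Favor deeper (more specific) nodes, penalize far-away intersections.
    return max(hits, key=lambda h: depth_weight * h.node_depth
                                   - distance_weight * h.ray_distance)

picked = most_likely_hit([Hit(0, 2.0), Hit(1, 2.4), Hit(2, 2.5)])
print(picked)  # Hit(node_depth=2, ray_distance=2.5) under these weights
```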
{"title":"Multi-touch 3D navigation for a building energy management system","authors":"M. Näf, E. Ferranti","doi":"10.1109/3DUI.2011.5759231","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759231","url":null,"abstract":"This poster presents a multi-touch navigation interface for a building energy management system with a three-dimensional data model. It extends well established “rubber-band” 2D interaction gestures to work with a 3D world-in-hand paradigm with the help of a navigation widget to select the active manipulation axis. A nested, semi-transparent display of the data hierarchy requires careful selection of the manipulation pivot. A hit-testing scheme is introduced to select the most likely object within the hierarchy.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121731259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multimode immersive conceptual design system for architectural modeling and lighting
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759211
M. Cabral, P. Vangorp, G. Chaurasia, E. Chapoulie, M. Hachet, G. Drettakis
We present a new immersive system that supports initial conceptual design of simple architectural models, including lighting. Our system allows the manipulation of simple elements such as windows, doors, and rooms, while the overall model is automatically adjusted to the manipulation. The system runs on a four-sided stereoscopic, head-tracked immersive display. We also provide simple lighting design capabilities, with an abstract representation of sunlight and its effects when shining through a window. Our system provides three modes of interaction: a miniature-model table mode, a full-scale immersive mode, and a combination of the two which we call mixed mode. We performed an initial pilot user test to evaluate the relative merits of each mode for a set of basic tasks, such as resizing and moving windows or walls, and a basic light-matching task. The study indicates that users appreciated the immersive nature of the system and found interaction to be natural and pleasant. In addition, the results indicate that mean performance times were quite similar across the different modes, opening up the possibility of their combined use in effective immersive modeling systems for novice users.
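A minimal sketch of the kind of automatic adjustment described, reduced to a single hypothetical constraint (the system's actual adjustment logic is not detailed in the abstract): keep a dragged or resized window inside its wall.

```python
def adjust_window(wall_width, win_x, win_width):
    """Clamp a window's position and width so it stays within the wall."""
    win_width = min(win_width, wall_width)               # cannot exceed the wall
    win_x = max(0.0, min(win_x, wall_width - win_width)) # push back inside
    return win_x, win_width

print(adjust_window(4.0, 3.5, 1.0))  # -> (3.0, 1.0): window pushed back inside
```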
{"title":"A multimode immersive conceptual design system for architectural modeling and lighting","authors":"M. Cabral, P. Vangorp, G. Chaurasia, E. Chapoulie, M. Hachet, G. Drettakis","doi":"10.1109/3DUI.2011.5759211","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759211","url":null,"abstract":"We present a new immersive system which allows initial conceptual design of simple architectural models, including lighting. Our system allows the manipulation of simple elements such as windows, doors and rooms while the overall model is automatically adjusted to the manipulation. The system runs on a four-sided stereoscopic, head-tracked immersive display. We also provide simple lighting design capabilities, with an abstract representation of sunlight and its effects when shining through a window. Our system provides three different modes of interaction, a miniature-model table mode, a fullscale immersive mode and a combination of table and immersive which we call mixed mode. We performed an initial pilot user test to evaluate the relative merits of each mode for a set of basic tasks such as resizing and moving windows or walls, and a basic light-matching task. The study indicates that users appreciated the immersive nature of the system, and found interaction to be natural and pleasant. In addition, the results indicate that the mean performance times seem quite similar in the different modes, opening up the possibility for their combined usage for effective immersive modeling systems for novice users.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115889024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-touch RST in 2D and 3D spaces: Studying the impact of directness on user performance
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759220
Sebastian Knödel, M. Hachet
The RST multi-touch technique allows one to simultaneously control Rotations, Scaling, and Translations from multi-touch gestures. We conducted a user study to better understand the impact of directness on user performance for an RST docking task, under both 2D and 3D visualization conditions. This study showed that direct touch shortens completion times, but indirect interaction improves efficiency and precision, particularly for 3D visualizations. The study also showed that users' trajectories are comparable across all conditions (2D/3D and direct/indirect). This suggests that indirect RST control may be valuable for interactive visualization of 3D content. To illustrate this finding, we present a demo application that allows novice users to arrange 3D objects on a 2D virtual plane in an easy and efficient way.
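For reference, the core of RST control is the classical two-point solution: the change in length of the inter-finger vector gives scale, its change in angle gives rotation, and the midpoint motion gives translation. A minimal 2D sketch:

```python
import math

def rst_from_two_points(p1, p2, q1, q2):
    """Recover rotation, scale, translation as two contacts move (p1,p2)->(q1,q2)."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]   # old inter-finger vector
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]   # new inter-finger vector
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    # Translation moves the old midpoint onto the new midpoint.
    tx = (q1[0] + q2[0]) / 2 - (p1[0] + p2[0]) / 2
    ty = (q1[1] + q2[1]) / 2 - (p1[1] + p2[1]) / 2
    return rotation, scale, (tx, ty)

# Example: both fingers translate by (1, 0) with no rotation or scaling.
print(rst_from_two_points((0, 0), (1, 0), (1, 0), (2, 0)))
# -> (0.0, 1.0, (1.0, 0.0))
```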
{"title":"Multi-touch RST in 2D and 3D spaces: Studying the impact of directness on user performance","authors":"Sebastian Knödel, M. Hachet","doi":"10.1109/3DUI.2011.5759220","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759220","url":null,"abstract":"The RST multi-touch technique allows one to simultaneously control Rotations, Scaling, and Translations from multi-touch gestures. We conducted a user study to better understand the impact of directness on user performance for a RST docking task, for both 2D and 3D visualization conditions. This study showed that direct-touch shortens completion times, but indirect interaction improves efficiency and precision, and this is particularly true for 3D visualizations. The study also showed that users' trajectories are comparable for all conditions (2D/3D and direct/indirect). This tends to show that indirect RST control may be valuable for interactive visualization of 3D content. To illustrate this finding, we present a demo application that allows novice users to arrange 3D objects on a 2D virtual plane in an easy and efficient way.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115628317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiple multi-touch touchpads for 3D selection
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759232
T. Ohnishi, R. Lindeman, K. Kiyokawa
We propose a 3D selection method using multiple multi-touch touchpads. The method enables 3D region selection with fewer actions under certain constraints, such as the 3D region being defined by a rectangular parallelepiped. Our method uses an asymmetric bimanual technique to define a 3D region, which in the best case requires only a single action. We employ two touchpads, each recognizing input from up to two fingers, and all actions can be executed while the user rests her arms on the table, reducing the fatigue caused when interacting with multi-touch displays. The technique also supports other typical manipulations, such as object and camera translation and rotation. The 3D region selection technique can be applied to define visualization regions in volumetric rendering or to select objects within a scene.
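A minimal sketch of how a single bimanual action could define such a region; the finger-to-axis mapping below is an illustrative assumption, not necessarily the paper's:

```python
def box_from_touchpads(left_f1, left_f2, right_f1, right_f2):
    """Two fingers on the left pad span the x-y footprint; two fingers on
    the right pad (its vertical axis) span the z extent."""
    (x1, y1), (x2, y2) = left_f1, left_f2
    z1, z2 = right_f1[1], right_f2[1]
    return (min(x1, x2), min(y1, y2), min(z1, z2),
            max(x1, x2), max(y1, y2), max(z1, z2))

# One simultaneous four-finger touch yields the full 3D region.
print(box_from_touchpads((0.1, 0.2), (0.6, 0.8), (0.0, 0.3), (0.0, 0.9)))
# -> (0.1, 0.2, 0.3, 0.6, 0.8, 0.9)
```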
{"title":"Multiple multi-touch touchpads for 3D selection","authors":"T. Ohnishi, R. Lindeman, K. Kiyokawa","doi":"10.1109/3DUI.2011.5759232","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759232","url":null,"abstract":"We propose a 3D selection method with multiple multi-touch touchpads. The method enables 3D region selection requiring fewer actions assuming some constraints, such as that the 3D region is defined by a rectangular parallelepiped. Our method uses an asymmetric bimanual technique to define a 3D region which in the best case requires only a single action. We employ two touchpads, each recognizing input from up to two fingers, and all actions can be executed while the user is resting her arms on the table, reducing fatigue caused when interacting with multi-touch displays. The technique also supports other typical manipulations, such as object and camera translation and rotation. The 3D region selection technique can be applied to define visualization regions in volumetric rendering or objects within a scene.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114587454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reconfigurable architecture for multimodal and collaborative interactions in Virtual Environments
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759210
Pierre Martin, P. Bourdot, Damien Touraine
Many studies have been carried out on multimodal and collaborative systems in VR. Although these two aspects are usually studied separately, they share interesting similarities. This paper focuses on the reconfigurability and implementation of a multimodal and collaborative supervisor for Virtual Environments (VEs). The aim of this supervisor is to fuse information from VR devices in order to control immersive multi-user applications through the main communication and sensorimotor channels of humans. Beyond the architectural aspect, we discuss the modularity and genericity of our system, implemented in C++, which can be embedded into different VR platforms. Moreover, its XML-based configuration system makes it easily applicable to many different contexts. The reconfigurable features are then illustrated via two scenarios: a cognition-oriented assembly task with single-user multimodal interactions, and an industrial assembly task with multimodal and collaborative interactions in a co-located multi-user environment.
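As an illustration of the configuration idea, here is a sketch of loading a hypothetical supervisor configuration; the XML element and attribute names are invented for this example and do not reflect the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration: which modalities to listen to, and how to fuse them.
CONFIG = """
<supervisor>
  <modality name="speech" device="microphone"/>
  <modality name="gesture" device="tracker"/>
  <fusion strategy="frame-based" window-ms="250"/>
</supervisor>
"""

root = ET.fromstring(CONFIG)
modalities = {m.get("name"): m.get("device") for m in root.iter("modality")}
fusion = root.find("fusion").attrib
print(modalities)  # {'speech': 'microphone', 'gesture': 'tracker'}
print(fusion)      # {'strategy': 'frame-based', 'window-ms': '250'}
```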
{"title":"A reconfigurable architecture for multimodal and collaborative interactions in Virtual Environments","authors":"Pierre Martin, P. Bourdot, Damien Touraine","doi":"10.1109/3DUI.2011.5759210","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759210","url":null,"abstract":"Many studies have been carried out on multimodal and collaborative systems in VR. Although these two aspects are usually studied separately, they share interesting similarities. This paper focuses on the reconfigurable aspect and the implementation of a multimodal and collaborative supervisor for Virtual Environments (VEs). The aim of this supervisor is to ensure the merge of pieces of information from VR devices in order to control immersive multi-user applications through the main communication and sensorimotor channels of humans. Beyond the architectural aspect, we give indications on the modularity and the genericity of our system, implemented in C++, which could be embedded into different VR platforms. Moreover, its XML-based configuration system allows it to be easily applicable to many different contexts. The reconfigurable features are then illustrated via two scenarios: a cognitive oriented assembly task with single user multimodal interactions, and an industrial assembly task with multimodal and collaborative interactions in a co-located multi-user environment.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117103668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Desktop haptic Strip for exploration of virtual objects
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759225
M. Covarrubias, M. Bordegoni, U. Cugini, M. Antolini
This research work describes the Desktop Strip haptic interface, a device for exploring virtual surfaces with aesthetic value. The device allows continuous, free-hand contact on a developable plastic tape actuated by a modular servo-controlled mechanism using a tessellation approach. The device has enabled users to interact with and feel a wide variety of virtual objects using the palm of the hand. This research work discusses the design concept, novel kinematics, and mechanics of the Desktop Strip.
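A minimal sketch of the tessellation idea: sample the target cross-section curve at one control point per servo module, so the strip bends through the sampled points. The module count and curve below are illustrative assumptions:

```python
import math

def tessellate(curve, num_servos=6):
    """Sample the target curve at evenly spaced parameters, one per module."""
    return [curve(i / (num_servos - 1)) for i in range(num_servos)]

# Example: a gentle sine-shaped cross-section, heights in millimetres.
heights = tessellate(lambda t: 20.0 * math.sin(math.pi * t))
print([round(h, 1) for h in heights])  # -> [0.0, 11.8, 19.0, 19.0, 11.8, 0.0]
```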
{"title":"Desktop haptic Strip for exploration of virtual objects","authors":"M. Covarrubias, M. Bordegoni, U. Cugini, M. Antolini","doi":"10.1109/3DUI.2011.5759225","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759225","url":null,"abstract":"This research work describes the Desktop Strip haptic interface, a device which is used for exploration of virtual surfaces with aesthetic value. Such a device allows a continuous, free hand contact on a developable plastic tape actuated by a modular servo-controlled mechanism using the tessellation approach. The device has enabled users to interact with and feel a wide variety of virtual objects by using the palm of the hand. This research work discusses the design concept, novel kinematics and mechanics of the Desktop Strip.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132651120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reusable library of 3D interaction techniques
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759201
P. Figueroa, David Castro
We present a library of reusable, abstract, low-granularity components for the development of novel interaction techniques. Based on the InTml language and through an iterative process, we have designed 7 selection and 5 travel techniques from [5] as dataflows of reusable components. The result is a compact set of 30 components that represent interactive content and useful behavior for interaction. We added a library of 20 components for device handling, in order to create complete, portable applications. By design, we achieved 68% component reusability, measured as the number of components used in more than one technique divided by the total number of components used. As a reusability test, we used this library to describe some of the interaction techniques in [1], a task that required only 2% new components.
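The reusability figure is a simple ratio. Assuming the total equals the 50 library components mentioned above (30 interaction plus 20 device-handling), 68% corresponds to 34 components appearing in more than one technique:

```python
# Worked check, assuming the denominator is the 50 library components above.
total_used = 50
reused = 34  # components appearing in more than one technique
print(f"{reused / total_used:.0%}")  # -> 68%
```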
{"title":"A reusable library of 3D interaction techniques","authors":"P. Figueroa, David Castro","doi":"10.1109/3DUI.2011.5759201","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759201","url":null,"abstract":"We present a library of reusable, abstract, low granularity components for the development of novel interaction techniques. Based on the InTml language and through an iterative process, we have designed 7 selection and 5 travel techniques from [5] as dataflows of reusable components. The result is a compact set of 30 components that represent interactive content and useful behavior for interaction. We added a library of 20 components for device handling, in order to create complete, portable applications. By design, we achieved a 68% of component reusability, measured as the number of components used in more than one technique, over the total number of used components. As a reusability test, we used this library to describe some interaction techniques in [1], a task that required only 2% of new components.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122146120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling multi-point haptic grasping in virtual environments
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759217
Q. Ang, B. Horan, Z. Najdovski, S. Nahavandi
Haptic interaction has received increasing research interest in recent years. Currently, most commercially available haptic devices provide the user with a single point of interaction. Multi-point haptic devices present a logical progression in device design and enable the operator to experience a far wider range of haptic interactions, particularly grasping with multiple fingers. This is highly desirable for various haptically enabled applications, including virtual training, telesurgery, and telemanipulation. This paper presents a gripper attachment that utilises two low-cost, commercially available haptic devices to facilitate multi-point haptic grasping. It renders forces to the user's fingers independently, and its use of Phantom Omni haptic devices offers several benefits over more complex approaches, such as low cost, reliability, and ease of programming. The workspace of the gripper attachment is considered, and in order to haptically render the desired forces to the user's fingers, the kinematic analysis is discussed and the necessary formulations are presented. The integrated multi-point haptic platform is presented, and exploration of a virtual environment using CHAI 3D is demonstrated.
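A minimal sketch of one way grasp forces could be rendered to two fingers: equal-and-opposite forces along the inter-finger axis, scaled by penetration into the grasped object. This simple spring model stands in for the paper's kinematic formulation, which the abstract does not detail:

```python
import numpy as np

def grasp_forces(finger_a, finger_b, object_width, stiffness=200.0):
    """Spring-model grasp: forces push the fingers apart when they squeeze."""
    axis = finger_b - finger_a
    gap = np.linalg.norm(axis)
    direction = axis / gap
    penetration = max(0.0, object_width - gap)  # how far the fingers squeeze in
    force_on_a = -stiffness * penetration * direction
    return force_on_a, -force_on_a

fa, fb = grasp_forces(np.array([0.0, 0, 0]), np.array([0.04, 0, 0]), 0.05)
print(fa, fb)  # 2 N on each finger, directed away from the object center
```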
{"title":"Enabling multi-point haptic grasping in virtual environments","authors":"Q. Ang, B. Horan, Z. Najdovski, S. Nahavandi","doi":"10.1109/3DUI.2011.5759217","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759217","url":null,"abstract":"Haptic interaction has received increasing research interest in recent years. Currently, most commercially available haptic devices provide the user with a single point of interaction. Multi-point haptic devices present a logical progression in device design and enable the operator to experience a far wider range of haptic interactions, particularly the ability to grasp via multiple fingers. This is highly desirable for various haptically enabled applications including virtual training, telesurgery and telemanipulation. This paper presents a gripper attachment which utilises two low-cost commercially available haptic devices to facilitate multi-point haptic grasping. It provides the ability to render forces to the user's fingers independently and using Phantom Omni haptic devices offers several benefits over more complex approaches such as low-cost, reliability, and ease of programming. The workspace of the gripper attachment is considered and in order to haptically render the desired forces to the user's fingers, kinematic analysis is discussed and necessary formulations presented. The integrated multi-point haptic platform is presented and exploration of a virtual environment using CHAI 3D is demonstrated.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129986095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3fMT - A technique for camera manipulation in 3D space with a multitouch display
Pub Date: 2011-03-19 | DOI: 10.1109/3DUI.2011.5759233
Rok Orel, Bojan Blazica
In this paper we present a novel approach to camera manipulation in 3D space using three fingers on a 2D surface. We are developing a user-friendly, intuitive interface that manipulates the view with all six degrees of freedom on a multitouch display.
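A sketch of one plausible three-finger mapping to six degrees of freedom; the actual 3fMT mapping may differ. Here two fingers drive the familiar pan/zoom/roll, and the third finger's displacement drives pitch and yaw:

```python
import math

def camera_update(p1, p2, p3, q1, q2, q3, orbit_gain=0.01):
    """Map three contacts moving (p1,p2,p3)->(q1,q2,q3) to 6-DOF camera deltas."""
    # Pan: midpoint motion of the first two fingers (camera x/y translation).
    pan = ((q1[0]+q2[0]-p1[0]-p2[0]) / 2, (q1[1]+q2[1]-p1[1]-p2[1]) / 2)
    # Zoom: pinch distance change (camera z translation).
    zoom = math.dist(q1, q2) - math.dist(p1, p2)
    # Roll: rotation of the inter-finger vector.
    roll = (math.atan2(q2[1]-q1[1], q2[0]-q1[0])
            - math.atan2(p2[1]-p1[1], p2[0]-p1[0]))
    # Pitch/yaw: third finger's displacement orbits the camera.
    yaw, pitch = orbit_gain * (q3[0]-p3[0]), orbit_gain * (q3[1]-p3[1])
    return pan, zoom, (roll, pitch, yaw)

# Example: third finger drags 50 px right -> pure yaw.
print(camera_update((0, 0), (100, 0), (200, 0), (0, 0), (100, 0), (250, 0)))
# -> ((0.0, 0.0), 0.0, (0.0, 0.0, 0.5))
```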
{"title":"3fMT - A technique for camera manipulation in 3D space with a multitouch display","authors":"Rok Orel, Bojan Blazica","doi":"10.1109/3DUI.2011.5759233","DOIUrl":"https://doi.org/10.1109/3DUI.2011.5759233","url":null,"abstract":"In this paper we present a novel approach to camera manipulation in 3D space with three fingers on a 2D surface. We are developing a user friendly and intuitive interface that manipulates our view with all 6 degrees of freedom on a multitouch display.","PeriodicalId":230131,"journal":{"name":"2011 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"272 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131481820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}