Tech-note: VTrail: Supporting trailblazing in virtual environments
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811209
Daniel Iaboni, C. MacGregor
Trails are a proven means of improving performance in virtual environments (VEs), but there is very little understanding of, or support for, the role of the trailblazer. The Use-IT Lab is currently designing a tool, the VTrail System, to support trailblazing in VEs. The objective of this document is to introduce the concept of trailblazing, present the initial prototype of a tool designed specifically to support trailblazing, and discuss results from an initial usability study.
{"title":"Tech-note: Vtrail: Supporting trailblazing in virtual environments","authors":"Daniel Iaboni, C. MacGregor","doi":"10.1109/3DUI.2009.4811209","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811209","url":null,"abstract":"Trails are a proven means of improving performance in virtual environments (VE) but there is very little understanding or support for the role of the trailblazer. The Use-IT Lab is currently designing a tool, the VTrail System, to support trailblazing in VE's. The objective of this document is to introduce the concept of trailblazing, present the initial prototype for a tool designed specifically to support trailblazing and discuss results from an initial usability study.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116398574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arch-Explore: A natural user interface for immersive architectural walkthroughs
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811208
G. Bruder, Frank Steinicke, K. Hinrichs
In this paper we propose the Arch-Explore user interface, which supports natural exploration of architectural 3D models at different scales in real-walking virtual reality (VR) environments such as head-mounted display (HMD) or CAVE setups. We discuss in detail how user movements can be transferred to the virtual world to enable walking through virtual indoor environments. To overcome the limited interaction space in small VR laboratory setups, we have implemented redirected walking techniques to support natural exploration of comparatively large-scale virtual models. Furthermore, the concept of virtual portals provides a means to cover long distances intuitively within architectural models. We describe the software and hardware setup and discuss the benefits of Arch-Explore.
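The abstract does not give the authors' gain values; as a rough illustration of the redirected-walking idea, the sketch below scales real-world motion by hypothetical translation and rotation gains before applying it to the virtual camera.

```python
# Hypothetical sketch of redirected walking, not the authors' implementation:
# real user motion is scaled by gains before it is applied to the virtual
# viewpoint, so a small lab can map onto a larger architectural model.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    yaw_deg: float  # heading in degrees

def redirect(prev_real: Pose, curr_real: Pose, virt: Pose,
             translation_gain: float = 1.2,
             rotation_gain: float = 1.1) -> Pose:
    """Map a real-world pose delta to a gain-scaled virtual pose delta."""
    dx = (curr_real.x - prev_real.x) * translation_gain
    dy = (curr_real.y - prev_real.y) * translation_gain
    dyaw = (curr_real.yaw_deg - prev_real.yaw_deg) * rotation_gain
    return Pose(virt.x + dx, virt.y + dy, (virt.yaw_deg + dyaw) % 360.0)

# One tracking frame: the user steps 0.5 m and turns 10 degrees in the lab.
prev, curr = Pose(0.0, 0.0, 0.0), Pose(0.5, 0.0, 10.0)
print(redirect(prev, curr, Pose(12.0, 3.0, 90.0)))
```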
{"title":"Arch-Explore: A natural user interface for immersive architectural walkthroughs","authors":"G. Bruder, Frank Steinicke, K. Hinrichs","doi":"10.1109/3DUI.2009.4811208","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811208","url":null,"abstract":"In this paper we propose the Arch-Explore user interface, which supports natural exploration of architectural 3D models at different scales in a real walking virtual reality (VR) environment such as head-mounted display (HMD) or CAVE setups. We discuss in detail how user movements can be transferred to the virtual world to enable walking through virtual indoor environments. To overcome the limited interaction space in small VR laboratory setups, we have implemented redirected walking techniques to support natural exploration of comparably large-scale virtual models. Furthermore, the concept of virtual portals provides a means to cover long distances intuitively within architectural models. We describe the software and hardware setup and discuss benefits of Arch-Explore.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114356105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multiscale interaction technique for large, high-resolution displays
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811202
Sarah Peck, Chris North, D. Bowman
This paper explores the link between users' physical navigation, specifically their distance from their current object(s) of focus, and their interaction scale. We define a new 3D interaction technique, called multiscale interaction, which links users' scale of perception and their scale of interaction. The technique exploits users' physical navigation in the 3D space in front of a large, high-resolution display, using it to explicitly control the scale of interaction in addition to the scale of perception. Other interaction techniques for large displays have not previously considered physical navigation to this degree. We identify the design space of the technique, which other researchers can continue to explore and build on, and evaluate one implementation of multiscale interaction to begin to quantify its benefits. We show evidence of a natural psychological link between scale of perception and scale of interaction, and that exploiting it as an explicit control in the user interface can be beneficial to users in problem-solving tasks. In addition, we show that designing against this philosophy can be detrimental.
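A hedged sketch of the core mapping: the user's physical distance from the display drives the scale of interaction. The distance and scale ranges below are assumptions for illustration, not values from the paper.

```python
# Minimal sketch (assumed parameters): viewing distance from a large display
# is interpolated into an interaction scale, e.g. a selection radius that
# grows as the user steps back and shrinks as they approach for detail work.
def interaction_scale(distance_m: float,
                      near_m: float = 0.5, far_m: float = 3.0,
                      min_scale: float = 1.0, max_scale: float = 16.0) -> float:
    """Interpolate interaction scale from viewing distance (clamped)."""
    t = (distance_m - near_m) / (far_m - near_m)
    t = max(0.0, min(1.0, t))
    return min_scale + t * (max_scale - min_scale)

# Close to the display: fine-grained control; stepping back: coarser control.
print(interaction_scale(0.6))   # ~1.6
print(interaction_scale(2.5))   # ~13.0
```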
{"title":"A multiscale interaction technique for large, high-resolution displays","authors":"Sarah Peck, Chris North, D. Bowman","doi":"10.1109/3DUI.2009.4811202","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811202","url":null,"abstract":"This paper explores the link between users' physical navigation, specifically their distance from their current object(s) of focus, and their interaction scale. We define a new 3D interaction technique, called multiscale interaction, which links users' scale of perception and their scale of interaction. The technique exploits users' physical navigation in the 3D space in front of a large high-resolution display, using it to explicitly control scale of interaction, in addition to scale of perception. Other interaction techniques for large displays have not previously considered physical navigation to this degree. We identify the design space of the technique, which other researchers can continue to explore and build on, and evaluate one implementation of multiscale interaction to begin to quantify the benefits of the technique. We show evidence of a natural psychological link between scale of perception and scale of interaction and that exploiting it as an explicit control in the user interface can be beneficial to users in problem solving tasks. In addition, we show that designing against this philosophy can be detrimental.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128071199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Updating an obsolete trainer using passive haptics and pressure sensors
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811233
Malachi Wurpts
Training systems based on hardware mock-ups provide physical fidelity at the expense of flexibility. Maintaining concurrency in these mock-ups can be time-consuming and expensive. Over the past several years, Southwest Research Institute (SwRI®) has worked with the United States Air Force to develop a generalized approach that uses purely virtual assets to provide training in a flexible environment in which configuration changes are made solely through software modifications. While a purely virtual approach has proven effective and flexible, it lacks the realistic interactions supported by the physical constraints present in mock-ups. This paper describes a novel approach that provides a more realistic interface while also extending the useful life of a training system that might otherwise be rendered obsolete. The technique combines virtual and real assets to provide haptic feedback. The interaction technique uses precision finger tracking combined with tactile sensors attached to the fingertip.
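As a loose illustration of how tracked fingertips and pressure sensing might be fused, the snippet below registers a press only when the fingertip is near a virtual control and the pressure reading confirms physical contact; all names, positions, and thresholds are hypothetical.

```python
# Illustrative sketch only (control names and thresholds are assumptions):
# a press is registered when the tracked fingertip lies over a virtual
# control AND the fingertip pressure sensor confirms physical contact.
import math

def near(p, q, tol=0.01):
    return math.dist(p, q) <= tol  # within 1 cm of the control

def detect_press(finger_pos, pressure, controls, pressure_threshold=0.3):
    """Return the id of the virtual control being pressed, or None."""
    if pressure < pressure_threshold:          # no physical contact yet
        return None
    for control_id, control_pos in controls.items():
        if near(finger_pos, control_pos):      # fingertip over this control
            return control_id
    return None

controls = {"master_arm_switch": (0.10, 0.42, 0.05)}   # hypothetical layout
print(detect_press((0.101, 0.421, 0.049), pressure=0.8, controls=controls))
```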
{"title":"Poster: Updating an obsolete trainer using passive haptics and pressure sensors","authors":"Malachi Wurpts","doi":"10.1109/3DUI.2009.4811233","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811233","url":null,"abstract":"Training systems based on hardware mockups provide physical fidelity at the expense of flexibility. Maintaining concurrency in these mock-ups can be time-consuming and expensive. Over the past several years, Southwest Research Institute (SwRI®) has worked with the United States Air Force to develop a generalized approach which uses purely virtual assets to provide training in a flexible environment in which configuration changes are made solely through software modifications. While a purely virtual approach has proven effective and flexible, it lacks the realistic interactions supported by the physical constraints present in mock-ups. This paper describes a novel approach to provide a more realistic interface while also extending the useful life of a training system which might otherwise be rendered obsolete. The technique combines virtual and real assets to provide haptic feedback. The interaction technique uses precision finger tracking combined with tactile sensors attached to the finger tip.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129806514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Collaborative data exploration using two navigation strategies
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811221
Omar Gomez, H. Trefftz, P. Boulanger, W. Bischof
Virtual collaborative systems are vital tools for accessing and sharing scientific data visualizations. This paper shows how two different modes of collaboration can affect user performance in a specific exploration task. Experiments with users working in pairs showed that a lack of mobility can affect their ability to achieve specific exploration goals in a virtual environment. Our analysis reveals that the task was completed more efficiently when users were allowed to move freely and independently rather than working with limited mobility. In these systems, users adapted their own abilities and minimized the effect of mobility restrictions.
{"title":"Poster: Collaborative data exploration using two navigation strategies","authors":"Omar Gomez, H. Trefftz, P. Boulanger, W. Bischof","doi":"10.1109/3DUI.2009.4811221","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811221","url":null,"abstract":"Virtual collaborative systems are vital tools for accessing and sharing scientific data visualizations. This paper shows how two different modes of collaboration can affect user performance in a specific exploration task. Experiments with groups of users that are working in pairs showed that the lack of mobility can affect the ability to achieve specific exploration goals in a virtual environment. Our analysis reveals that the task was completed more efficiently when users were allowed to move freely and independently instead of working with limited mobility. In these systems, users adapted their own abilities and minimized the effect of mobility restrictions.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132196157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Interscopic multi-touch surfaces: Using bimanual interaction for intuitive manipulation of spatial data
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811219
Johannes Schöning, Frank Steinicke, A. Krüger, K. Hinrichs
In recent years, visualization of and interaction with 3D data have become increasingly popular and widespread due to the requirements of numerous application areas. Two-dimensional desktop systems are often limited in cases where natural and intuitive interfaces are desired. Sophisticated 3D user interfaces, as provided by virtual reality (VR) systems consisting of stereoscopic projection and tracked input devices, are rarely adopted by ordinary users or even by experts. Since most applications dealing with 3D data still use traditional 2D GUIs, current user interface designs lack adequate efficiency. Multi-touch interaction has received considerable attention in the last few years, in particular for non-immersive, natural 2D interaction. Interactive multi-touch surfaces even support three degrees of freedom in terms of 2D position on the surface and varying levels of pressure. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation of 2D and 3D user interfaces. Stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations of multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms that combine traditional 2D interaction performed in monoscopic mode with 3D interaction and stereoscopic projection, which we refer to as interscopic multi-touch surfaces (iMUTS).
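One way to read the "three degrees of freedom" claim is that each contact supplies x, y, and pressure; the sketch below maps such a contact to a 3D point, with the pressure-to-depth mapping being an assumption rather than the authors' design.

```python
# A minimal sketch (assumed mapping, not from the paper): a multi-touch
# contact gives x, y on the surface plus a pressure value, interpreted here
# as a third, depth-like degree of freedom for manipulating spatial data.
def touch_to_3d(x_px, y_px, pressure,
                surface_w=1920, surface_h=1080,
                max_depth=0.5):
    """Map a touch (pixels, pressure in 0..1) to normalized 3D coordinates."""
    x = x_px / surface_w
    y = y_px / surface_h
    z = pressure * max_depth       # harder press -> deeper into the scene
    return (x, y, z)

print(touch_to_3d(960, 540, pressure=0.4))  # (0.5, 0.5, 0.2)
```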
{"title":"Poster: Interscopic multi-touch surfaces: Using bimanual interaction for intuitive manipulation of spatial data","authors":"Johannes Schöning, Frank Steinicke, A. Krüger, K. Hinrichs","doi":"10.1109/3DUI.2009.4811219","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811219","url":null,"abstract":"In recent years visualization of and interaction with 3D data have become more and more popular and widespread due to the requirements of numerous application areas. Two-dimensional desktop systems are often limited in cases where natural and intuitive interfaces are desired. Sophisticated 3D user interfaces, as they are provided by virtual reality (VR) systems consisting of stereoscopic projection and tracked input devices, are rarely adopted by ordinary users or even by experts. Since most applications dealing with 3D data still use traditional 2D GUIs, current user interface designs lack adequate efficiency. Multi-touch interaction has received considerable attention in the last few years, in particular for non-immersive, natural 2D interaction. Interactive multi-touch surfaces even support three degrees of freedom in terms of 2D position on the surface and varying levels of pressure. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the fundaments of the next generation 2D and 3D user interfaces. Indeed, stereoscopic display of 3D data provides an additional depth cue, but until now challenges and limitations for multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms that combine traditional 2D interaction performed in monoscopic mode with 3D interaction and stereoscopic projection, which we refer to as interscopic multi-touch surfaces (iMUTS).","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133283922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual multi-tools for hand and tool-based interaction with life-size virtual human agents
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811201
Aaron Kotranza, K. Johnsen, J. Cendan, Bayard Miller, D. Lind, Benjamin C. Lok
A common approach when simulating face-to-face interpersonal scenarios with virtual humans is to afford users only verbal interaction while providing rich verbal and non-verbal interaction from the virtual human. This is due to the difficulty of providing robust recognition of user non-verbal behavior and interpretation of these behaviors within the context of the verbal interaction between user and virtual human. To afford robust hand- and tool-based non-verbal interaction with life-sized virtual humans, we propose virtual multi-tools. A single hand-held, tracked interaction device acts as a surrogate for the virtual multi-tools: the user's hand, multiple tools, and other objects. By combining six-degree-of-freedom, high-update-rate tracking with extra degrees of freedom provided by buttons and triggers, a commodity device, the Nintendo Wii Remote, provides the kinesthetic and haptic feedback necessary for a high-fidelity approximation of the natural, unencumbered interaction provided by one's hands and physical hand-held tools. These qualities allow virtual multi-tools to be a less error-prone interface for social and task-oriented non-verbal interaction with a life-sized virtual human. This paper discusses the implementation of virtual multi-tools for hand- and tool-based interaction with life-sized virtual humans, and provides an initial evaluation of the usability of virtual multi-tools in the medical education scenario of conducting a neurological exam of a virtual human.
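A minimal sketch of the multi-tool concept under stated assumptions: one tracked prop stands in for several virtual tools, a button cycles the active tool, and the 6-DOF pose is forwarded to whichever tool is active. The device callbacks and tool names here are invented for illustration, not the authors' API.

```python
# Hypothetical sketch of the multi-tool idea (callbacks and tool names are
# assumptions): a single tracked prop is the surrogate for several tools.
TOOLS = ["hand", "reflex_hammer", "penlight", "tuning_fork"]

class VirtualMultiTool:
    def __init__(self):
        self.active = 0

    def on_button(self, button: str):
        if button == "next_tool":                 # cycle through surrogates
            self.active = (self.active + 1) % len(TOOLS)

    def on_pose(self, position, orientation):
        # Forward the tracked 6-DOF pose to the currently active virtual tool.
        return {"tool": TOOLS[self.active],
                "position": position,
                "orientation": orientation}

mt = VirtualMultiTool()
mt.on_button("next_tool")                          # switch hand -> hammer
print(mt.on_pose((0.2, 1.1, 0.4), (0.0, 0.0, 0.0, 1.0)))
```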
{"title":"Virtual multi-tools for hand and tool-based interaction with life-size virtual human agents","authors":"Aaron Kotranza, K. Johnsen, J. Cendan, Bayard Miller, D. Lind, Benjamin C. Lok","doi":"10.1109/3DUI.2009.4811201","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811201","url":null,"abstract":"A common approach when simulating face-to-face interpersonal scenarios with virtual humans is to afford users only verbal interaction while providing rich verbal and non-verbal interaction from the virtual human. This is due to the difficulty in providing robust recognition of user non-verbal behavior and interpretation of these behaviors within the context of the verbal interaction between user and virtual human. To afford robust hand and tool-based non-verbal interaction with life-sized virtual humans, we propose virtual multi-tools. A single hand-held, tracked interaction device acts as a surrogate for the virtual multi-tools: the user's hand, multiple tools, and other objects. By combining six degree-of-freedom, high update rate tracking with extra degrees of freedom provided by buttons and triggers, a commodity device, the Nintendo Wii Remote, provides the kinesthetic and haptic feedback necessary to provide a high-fidelity estimation of the natural, unencumbered interaction provided by one's hands and physical hand-held tools. These qualities allow virtual multi-tools to be a less error-prone interface to social and task-oriented non-verbal interaction with a life-sized virtual human. This paper discusses the implementation of virtual multi-tools for hand and tool-based interaction with life-sized virtual humans, and provides an initial evaluation of the usability of virtual multi-tools in the medical education scenario of conducting a neurological exam of a virtual human.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"501 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122210107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Evaluation of menu techniques using a 3D game input device
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811225
Dustin B. Chertoff, Ross W. Byers, J. Laviola
With the rise in popularity of 3D spatial interaction in console gaming, such as on the Nintendo Wii, it is important to determine whether findings from existing menu-technique research still hold when using a 3D pointing device such as the Wii Controller. Linear menus were compared with two other menu techniques: radial menus and rotary menus. Effectiveness was measured through task completion time and the number of task errors. A subjective measure was also taken to determine participant preferences. Participants performed faster and made fewer errors when using the radial menu technique. Radial menus were also preferred by participants. These results indicate that radial menus are an effective menu technique when used with a 3D pointing device. This is consistent with previous work on radial menus and indicates that the use of radial menus in gaming applications should be investigated further.
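For context, radial selection with a pointing device typically reduces to picking a slice by angle; the following sketch shows that geometry with assumed item labels and dead-zone size, not the study's actual implementation.

```python
# Sketch of radial-menu selection (geometry and labels assumed): the cursor's
# offset from the menu centre is converted to an angle, which indexes one of
# N equally sized slices; a small dead zone near the centre selects nothing.
import math

def radial_pick(cursor, centre, items, dead_zone=0.05):
    dx, dy = cursor[0] - centre[0], cursor[1] - centre[1]
    if math.hypot(dx, dy) < dead_zone:        # inside dead zone: no selection
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)
    slice_size = 2 * math.pi / len(items)
    return items[int(angle // slice_size)]

items = ["New", "Open", "Save", "Copy", "Paste", "Delete", "Undo", "Quit"]
print(radial_pick((0.30, 0.10), centre=(0.0, 0.0), items=items))  # -> "New"
```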
{"title":"Poster: Evaluation of menu techniques using a 3D game input device","authors":"Dustin B. Chertoff, Ross W. Byers, J. Laviola","doi":"10.1109/3DUI.2009.4811225","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811225","url":null,"abstract":"With the rise in popularity of 3D spatial interaction in console gaming, such as the Nintendo Wii, it is important to determine whether existing menuing technique findings still hold true when using a 3D pointing device such as the Wii Controller. Linear menus were compared with two other menu techniques: radial menus and rotary menus. Effectiveness was measured through task completion time and the number of task errors. A subjective measure was also taken to determine participant preferences. Participants performed faster and made fewer errors when using the radial menu technique. Radial menus were also preferred by participants. These results indicate that radial menus are an effective menu technique when used with a 3D pointing device. This is consistent with previous work regarding radial menus and indicates that the usage of radial menus in gaming applications should be investigated further.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126329001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Egocentric navigation for video surveillance in 3D Virtual Environments
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811214
G. D. Haan, Josef Scheuer, R. D. Vries, F. Post
Current surveillance systems can display many individual video streams within spatial context in a 2D map or 3D Virtual Environment (VE). The aim is to overcome some problems of traditional systems, e.g. to avoid the intensive mental effort needed to maintain orientation and to ease tracking of motion between different screens. However, such integrated environments introduce new challenges in navigation and comprehensive viewing, caused by imperfect video alignment and complex 3D interaction. In this paper, we propose a novel, first-person viewing and navigation interface for integrated surveillance monitoring in a VE. It is currently designed for egocentric tasks, such as tracking persons or vehicles across several cameras. For these tasks, it aims to minimize the operator's 3D navigation effort while maximizing coherence between video streams and spatial context. The user can easily navigate between adjacent camera views and is guided along 3D guidance paths. To achieve visual coherence, we use dynamic video embedding: according to the viewer's position, translucent 3D video canvases are smoothly transformed and blended into the simplified 3D environment. The animated first-person view provides fluent visual flow, which facilitates maintaining orientation and can aid spatial awareness. We discuss design considerations and the implementation of our proposed interface in our prototype surveillance system, and demonstrate its use and limitations in various surveillance environments.
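The dynamic video embedding could, for example, fade each canvas with the viewer's distance to the corresponding camera; the sketch below uses assumed fade radii and is not the authors' blending function.

```python
# Illustrative sketch (radii are assumptions): each video canvas is faded in
# as the virtual viewer approaches the corresponding camera's viewpoint,
# keeping nearby footage fully visible and distant feeds faint.
import math

def canvas_alpha(viewer_pos, camera_pos, full_radius=2.0, fade_radius=10.0):
    """Opacity in [0, 1]: 1 near the camera, fading to 0 with distance."""
    d = math.dist(viewer_pos, camera_pos)
    if d <= full_radius:
        return 1.0
    if d >= fade_radius:
        return 0.0
    return 1.0 - (d - full_radius) / (fade_radius - full_radius)

print(canvas_alpha((0, 1.7, 0), (4, 3.0, 1)))  # partially faded canvas
```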
{"title":"Egocentric navigation for video surveillance in 3D Virtual Environments","authors":"G. D. Haan, Josef Scheuer, R. D. Vries, F. Post","doi":"10.1109/3DUI.2009.4811214","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811214","url":null,"abstract":"Current surveillance systems can display many individual video streams within spatial context in a 2D map or 3D Virtual Environment (VE). The aim of this is to overcome some problems in traditional systems, e.g. to avoid intensive mental effort to maintain orientation and to ease tracking of motions between different screens. However, such integrated environments introduce new challenges in navigation and comprehensive viewing, caused by imperfect video alignment and complex 3D interaction. In this paper, we propose a novel, first-person viewing and navigation interface for integrated surveillance monitoring in a VE. It is currently designed for egocentric tasks, such a tracking persons or vehicles along several cameras. For these tasks, it aims to minimize the operator's 3D navigation effort while maximizing coherence between video streams and spatial context. The user can easily navigate between adjacent camera views and is guided along 3D guidance paths. To achieve visual coherence, we use dynamic video embedding: according to the viewer's position, translucent 3D video canvases are smoothly transformed and blended in the simplified 3D environment. The animated first-person view provides fluent visual flow which facilitates easier maintenance of orientation and can aid in spatial awareness. We discuss design considerations, the implementation of our proposed interface in our prototype surveillance system and demonstrate its use and limitations in various surveillance environments.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115147288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Shake menus: Towards activation and placement techniques for prop-based 3D graphical menus
Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811220
Sean White, David Feng, Steven K. Feiner
Shake menus are a novel method for activating, displaying, and selecting options presented relative to a tangible object or manipulator in a 3D user interface. They provide ready-to-hand interaction, including facile selection and placement of objects. We present the technique, several alternative methods for presenting shake menus (world-referenced, display-referenced, and object-referenced), and an evaluation of menu placement.
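A shake gesture can be detected, for instance, by counting acceleration reversals above a threshold within a sample window; the sketch below uses assumed thresholds and only illustrates the activation step, not the authors' implementation.

```python
# Minimal sketch of shake detection (threshold values are assumptions): the
# menu is activated when the tracked prop's acceleration reverses direction
# several times within a short window of samples.
def is_shake(accel_samples, threshold=8.0, min_reversals=3):
    """accel_samples: accelerations along one axis (m/s^2), oldest first."""
    reversals, last_sign = 0, 0
    for a in accel_samples:
        if abs(a) < threshold:
            continue                            # ignore small jitter
        sign = 1 if a > 0 else -1
        if last_sign and sign != last_sign:
            reversals += 1                      # direction flip detected
        last_sign = sign
    return reversals >= min_reversals

print(is_shake([0.5, 9.0, -10.0, 11.0, -9.5, 0.3]))  # True -> open the menu
```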
{"title":"Poster: Shake menus: Towards activation and placement techniques for prop-based 3D graphical menus","authors":"Sean White, David Feng, Steven K. Feiner","doi":"10.1109/3DUI.2009.4811220","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811220","url":null,"abstract":"Shake menus are a novel method for activating, displaying, and selecting options presented relative to a tangible object or manipulator in a 3D user interface. They provide ready-to-hand interaction, including facile selection and placement of objects. We present the technique, several alternative methods for presenting shake menus (world-referenced, display-referenced, and object-referenced), and an evaluation of menu placement.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128942642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}