Traditional 3D User Interfaces (3DUI) in immersive virtual reality can be inefficient in tasks that span diverse scales, perspectives, reference frames, and dimensions. This paper proposes a solution to this problem using a coordinated, tablet- and HMD-based, hybrid virtual environment system. Wearing a non-occlusive HMD, the user can view and interact with a tablet mounted on the non-dominant forearm, which provides a multi-touch interaction surface as well as an exocentric God view of the virtual world. To reduce transition gaps across 3D interaction tasks and interfaces, four coordination mechanisms are proposed; two were implemented, and one was evaluated in a user study featuring complex level-editing tasks. Based on subjective ratings, task performance, interview feedback, and video analysis, we found that having multiple Interaction Contexts (ICs) with complementary benefits can lead to good performance and user experience, despite the complexity of learning and using the hybrid system. The results also suggest keeping 3DUI tasks synchronized across the ICs, as this can help users understand their relationships, smooth within- and between-task IC transitions, and inspire more creative use of the different interfaces.
"Coordinated 3D interaction in tablet- and HMD-based hybrid virtual environments." Jia Wang and R. Lindeman. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2659777
At conference presentations, audience members can lose track of the points being indicated on the screen. Presenters usually use a pointer rod or a laser pointer, but these are neither convenient nor easily visible on a large screen, and an additional camera and screen are needed to show gestures. In this paper we propose intuitive presentation-support software: the presenter is superimposed onto the screen and can draw on it interactively. By recognizing natural, small actions, the software realizes presenter movement on screen while the presenter stays within a limited physical stage space. Through our software, presenters can point to any important area, draw supplementary items by hand, and of course show gestures on the large screen. We expect audiences will be better able to understand and stay focused.
"Getting yourself superimposed on a presentation screen." Kenji Funahashi and Yusuke Nakae. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661203
In this work, we present an initial user study that explores the use of a dedicated inertial measurement unit (IMU) to achieve spatial awareness in multi-surface environments (MSEs). Our initial results suggest that measurements provided by an IMU may not provide value over sensor fusion techniques for spatially aware MSEs, but warrant further exploration.
"Investigating inertial measurement units for spatial awareness in multi-surface environments." A. Azazi, T. Seyed, and F. Maurer. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661217
In this poster, we present an experiment to capture users' natural pointing posture in distal pointing tasks at large displays and to examine the effect of pointing posture on the performance of those tasks. Two types of pointing posture emerged: a stretched-arm posture (69% of the participants) and a bent-arm posture (31% of the participants). Posture type did not affect movement angle, but it did affect angular error, task completion time, and mean angular velocity.
"Natural pointing posture in distal pointing tasks." Heejin Kim, Seungjae Oh, Sung H. Han, and M. Chung. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661213
Medical imaging is essential to support most diagnoses. It often requires visualizing individual 2D slices from 3D volumetric datasets and switching between both representations. Combining an overview with a detailed view of the data [1] keeps the user in context when looking in detail at a slice. Given both their mobility and their suitability for direct manipulation, tablets are attractive devices for imaging analysis tasks. They have been successfully combined with tabletops [3], allowing new ways to explore volumetric data. However, while touch allows for more direct manipulation, it suffers from the well-known fat finger problem, which can occlude the display and make subtle visual changes hard to perceive. To overcome this problem, we propose to explore the space around tablet devices. Such an approach has been used for displays [2] to separate several workspaces of the desktop. Here, we use that space to invoke commands that need not be performed on the tablet itself, thus maximizing the visualization space during manipulations.
"Exploring tablet surrounding interaction spaces for medical imaging." H. Rateau, L. Grisoni, and B. Araújo. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661215
3D printing has shown great potential for creating tactile picture books that help blind children develop emergent literacy. Sighted children can be motivated to contribute to the modeling of more tactile picture books, but current 3D design tools are too difficult for them to use. Can sighted children model a tactile book with LEGO pieces instead? Can a LEGO model be converted to a digital model that can then be printed?
"Using LEGO to model 3D tactile picture books by sighted children for blind children." Jeeeun Kim, Abigale Stangl, and Tom Yeh. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661211
Remote guidance enables untrained users to solve complex tasks with the help of experts. These tasks often include positioning physical objects in certain poses, with the expert indicating the final pose to the user. The quality of annotations therefore strongly influences the success of the remote collaboration. This work compares two kinds of annotation methods (2D and 3D) in two scenarios of different complexity. A pilot study indicates that 3D annotations reduce the user's execution time in the complex scenario.
"Supporting remote guidance through 3D annotations." Philipp Tiefenbacher, Tobias Gehrlich, G. Rigoll, and Takashi Nagamatsu. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661206
Translational gains in head-mounted display (HMD) systems allow a user to walk at one rate in the real world while seeing themselves move at a faster or slower rate. Although several studies have measured how large gains must be for people to recognize them, little is known about how quickly gains can be changed without people noticing. We conducted an experiment in which participants wearing an HMD walked a straight path while we dynamically increased or decreased their virtual-world translation speed, then indicated whether their speed had increased or decreased during the walk. In general, we found that the starting gain affected detection and that, in most cases, there was little difference between gradual and instantaneous gain changes. These results can help inform redirected walking implementations and other HMD applications where translational gains are not constant.
"Human sensitivity to dynamic translational gains in head-mounted displays." Ruimin Zhang, Bochao Li, and S. Kuhl. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2659783
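The translational gain described in this abstract amounts to scaling the user's tracked real-world displacement before applying it to the virtual viewpoint, and the "gradual vs. instantaneous" manipulation amounts to how the gain value is changed over time. A minimal conceptual sketch (function names are illustrative, not from the paper):

```python
def apply_translational_gain(real_delta, gain):
    """Scale a real-world displacement vector (x, y, z) by a gain factor.

    gain > 1.0: the virtual viewpoint moves faster than the user walks;
    gain < 1.0: it moves slower; gain == 1.0 is veridical walking.
    """
    return tuple(gain * d for d in real_delta)


def ramp_gain(start_gain, end_gain, t):
    """Gradually interpolate the gain over normalized time t in [0, 1],
    as opposed to switching it instantaneously at t = 0."""
    t = max(0.0, min(1.0, t))  # clamp so the gain never overshoots
    return start_gain + (end_gain - start_gain) * t


# A 1 m forward step under a gain of 1.4 becomes 1.4 m of virtual travel.
print(apply_translational_gain((0.0, 0.0, 1.0), 1.4))
# Halfway through a gradual change from gain 1.0 to 2.0:
print(ramp_gain(1.0, 2.0, 0.5))
```

Each frame, an HMD application would compute `real_delta` from head tracking, pick the current gain (fixed, or ramped as above), and advance the virtual camera by the scaled displacement.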
"Session details: Flat surfaces in 3D space." Tobias Höllerer (session chair). In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/3247431
We present a user study that takes an in-depth look at the effect of immersion cues on 3D spatial problem solving by combining traditional performance and experience measures with brain data.
"An in-depth look at the benefits of immersion cues on spatial 3D problem solving." Cassandra Hoef, Jasmine Davis, Orit Shaer, and E. Solovey. In Proceedings of the 2nd ACM symposium on Spatial user interaction, October 4, 2014. DOI: 10.1145/2659766.2661222