"Extramission: A Large Scale Interactive Virtual Environment Using Head Mounted Projectors and Retro-reflectors." Hiroto Aoki, J. Rekimoto. Symposium on Spatial User Interaction, 2019. https://doi.org/10.1145/3357251.3357592
We present Extramission, a method for building a large-scale interactive virtual environment. It consists of dual head-mounted pico projectors and retro-reflective materials. With high-accuracy retro-reflective materials, laser beams scanned onto the user's retina produce a clear, focus-free image. In this retinal scanning configuration, scanned images remain clearly visible even when the projector luminance is low, which helps avoid overlap between projected images. Because overlap is small, Extramission can provide multi-user virtual experiences that show a different image to each individual, and the dual pico projectors give each user stereoscopic vision. Moreover, the tolerance for low luminance permits a larger distance between users and retro-reflectors, which is required for large-scale virtual experiences with head-mounted projectors. In this paper, we describe the principle and implementation of Extramission and evaluate its image display performance.
"Object Manipulation by Absolute Pointing with a Smartphone Gyro Sensor." Koki Sato, Mitsunori Matsushita. Symposium on Spatial User Interaction, 2019. https://doi.org/10.1145/3357251.3360006
The purpose of this study is to let users operate the various computers around them with their own smartphones. Voice-based methods for operating computers around the home, such as Internet of Things (IoT) appliances, are now widespread. However, voice operation has problems: the instruction patterns it can express are limited, and it cannot be used by many users simultaneously. To address these problems, we propose a method that determines the location a user is pointing to with a smartphone gyro sensor. This method achieves controller integration, multiple functions, and simultaneous use by multiple people.
"Effects of Shared Gaze Parameters on Visual Target Identification Task Performance in Augmented Reality." Nahal Norouzi, A. Erickson, Kangsoo Kim, Ryan Schubert, J. Laviola, G. Bruder, G. Welch. Symposium on Spatial User Interaction, 2019. https://doi.org/10.1145/3357251.3357587
Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users' interpersonal space with embodied cues such as their gaze direction. While beneficial for interpersonal spatial communication, such shared gaze environments suffer from multiple types of errors related to eye tracking and networking that can reduce objective performance and subjective experience. In this paper, we conducted a human-subject study to understand the impact of accuracy, precision, latency, and dropout-based errors on users' performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying error levels and target distances and measured participants' objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found significant differences suggesting that the simulated error levels had stronger effects on participants' performance than target distance, with accuracy and latency having a high impact on participants' error rate. We also observed that participants assessed their own performance as lower than it objectively was, and we discuss implications for practical shared gaze applications.
"Extending Virtual Reality Display Wall Environments Using Augmented Reality." Arthur Nishimoto, Andrew E. Johnson. Symposium on Spatial User Interaction, 2019. https://doi.org/10.1145/3357251.3357579
Two major form factors for virtual reality are head-mounted displays and large display environments such as CAVE® and its LCD-based successor CAVE2®. Each has distinct advantages and limitations based on how it is used. This work explores preserving the high resolution and sense of presence of CAVE2 environments in full stereoscopic mode by using a see-through augmented reality HMD to expand the user's field of regard beyond the physical display walls. In our exploratory study of a visual search task in a stereoscopic CAVE2, adding the HoloLens to expand the field of regard did not hinder participants' performance or accuracy, but it promoted more physical navigation, which participants reported in post-study interviews aided their spatial awareness of the virtual environment.
"Effects of Depth Layer Switching between an Optical See-Through Head-Mounted Display and a Body-Proximate Display." Anna Eiberger, P. Kristensson, S. Mayr, M. Kranz, Jens Grubert. Symposium on Spatial User Interaction, 2019. https://doi.org/10.1145/3357251.3357588
Optical see-through head-mounted displays (OST HMDs) typically display virtual content at a fixed focal distance, while users need to integrate this information with real-world information at different depth layers. This problem is pronounced in body-proximate multi-display systems, such as when an OST HMD is combined with a smartphone or smartwatch. While such joint systems open up a new design space, they also reduce users' ability to integrate visual information. We quantify this cost by presenting the results of an experiment (n=24) that evaluates human performance in a visual search task across an OST HMD and a body-proximate display at 30 cm. The results reveal that task completion time increases significantly by approximately 50% and the error rate increases significantly by approximately 100% compared to visual search on a single depth layer. These results highlight a design trade-off when designing joint OST HMD and body-proximate display systems.
"Improving Usability, Efficiency, and Safety of UAV Path Planning through a Virtual Reality Interface." Jesse Paterson, Jiwoong Han, T. Cheng, P. Laker, D. McPherson, Joseph Menke, A. Yang. Symposium on Spatial User Interaction, 2019. https://doi.org/10.1145/3357251.3362742
As the capability and complexity of UAVs continue to increase, specifying the complex 3D flight paths needed to instruct them becomes more complicated. Immersive interfaces, such as those afforded by virtual reality (VR), have several unique traits which may improve the user's ability to perceive and specify 3D information. These traits include stereoscopic depth cues, which induce a sense of physical space, as well as six-degrees-of-freedom (DoF) natural head-pose and gesture interactions. This work introduces an open-source platform for 3D aerial path planning in VR and compares it to existing UAV piloting interfaces. Our study found statistically significant improvements in safety over a manual control interface and in efficiency over a 2D touchscreen interface. The results illustrate that immersive interfaces provide a viable alternative to touchscreen interfaces for UAV path planning.
"Effects of stereo and head tracking in 3D selection tasks." Bartosz Bajer, Robert J. Teather, W. Stuerzlinger. Symposium on Spatial User Interaction, 2013. https://doi.org/10.1145/2491367.2491392
We report a 3D selection study comparing stereo and head tracking with both mouse and pen pointing. Results indicate stereo was primarily beneficial to the pen mode, but slightly hindered mouse speed. Head tracking had fewer noticeable effects.
"Effectiveness of commodity BCI devices as means to control an immersive virtual environment." Jerald Thomas, Steve Jungst, P. Willemsen. Symposium on Spatial User Interaction, 2013. https://doi.org/10.1145/2491367.2491403
This poster presents research investigating the control of an immersive virtual environment using the Emotiv EPOC, a consumer-grade brain-computer interface. The primary emphasis of the work is to determine the feasibility of the Emotiv EPOC for manipulating elements of an interactive virtual environment. We have developed a system that uses the Emotiv EPOC as the main interface to a custom testing environment comprising the Blender Game Engine, Python, and a VRPN system. A series of experiments that measure response time, reliability, and accuracy have been developed, and the current results are described. Our poster presents the current state of the project, including preliminary efforts in piloting the experiments. These findings provide insight into potential results from experiments with active subjects and appear promising.
"Bimanual spatial haptic interface for assembly tasks." Jonas Forsslund, Sara C. Schvartzman, S. Girod, Rebeka G. Silva, J. Salisbury, Sonny Chan, B. Jo. Symposium on Spatial User Interaction, 2013. https://doi.org/10.1145/2491367.2491398
We have created a novel virtual assembly tool that uses two haptic devices for bimanual manipulation. The project focuses on the manipulation of fractured jaw bones for patient-specific surgical planning, but it can be extended to any assembly task involving organically shaped objects (Figure 1). Spatial input devices that support virtual object manipulation through direct mapping are easier and more natural to use for tasks that are fundamentally 3D, such as assembly tasks. Employing both hands further provides a frame of reference that improves spatial understanding of the manipulated objects [2]. Few studies have examined the importance of haptic feedback for bimanual interactions, but it has been shown to be meaningful even for unimanual tasks [4]. We demonstrate our work in progress toward bringing high-fidelity haptic rendering to bimanually operated spatial interfaces. Since bimanual direct manipulation improves performance even without collision response, we hypothesize that haptic feedback improves it further.
"To touch or not to touch?: comparing 2D touch and 3D mid-air interaction on stereoscopic tabletop surfaces." G. Bruder, Frank Steinicke, W. Stuerzlinger. Symposium on Spatial User Interaction, 2013. https://doi.org/10.1145/2491367.2491369
Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) displays. Although this combination provides a compelling user experience, interaction with objects stereoscopically displayed in front of the screen poses some fundamental challenges: traditionally, touch-sensitive surfaces capture only direct contacts, so the user has to penetrate the visually perceived object to touch the 2D surface behind it. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In this paper we compare such 2D touch and 3D mid-air interactions in a Fitts' Law experiment with objects at varying stereoscopic parallax. The results show that the 2D touch technique is more efficient close to the screen, whereas for targets further from the screen, 3D selection outperforms 2D touch. Based on the results, we present implications for the design and development of future touch-sensitive interfaces for stereoscopic displays.